Test Report: Docker_Linux_crio_arm64 21724

360d9e050a05bd2ed6961537be9e77a8ddcd2d56:2025-10-13:41891

Failed tests (38/327)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.79
35 TestAddons/parallel/Registry 14.04
36 TestAddons/parallel/RegistryCreds 0.53
37 TestAddons/parallel/Ingress 147.27
38 TestAddons/parallel/InspektorGadget 6.31
39 TestAddons/parallel/MetricsServer 6.4
41 TestAddons/parallel/CSI 40.55
42 TestAddons/parallel/Headlamp 3.6
43 TestAddons/parallel/CloudSpanner 5.35
44 TestAddons/parallel/LocalPath 8.61
45 TestAddons/parallel/NvidiaDevicePlugin 5.27
46 TestAddons/parallel/Yakd 6.26
52 TestForceSystemdFlag 517.69
53 TestForceSystemdEnv 511.14
98 TestFunctional/parallel/ServiceCmdConnect 603.49
126 TestFunctional/parallel/ServiceCmd/DeployApp 600.99
135 TestFunctional/parallel/ServiceCmd/HTTPS 0.61
136 TestFunctional/parallel/ServiceCmd/Format 0.54
137 TestFunctional/parallel/ServiceCmd/URL 0.51
146 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.29
147 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.19
148 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.31
152 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.39
154 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.29
155 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.45
191 TestJSONOutput/pause/Command 1.76
197 TestJSONOutput/unpause/Command 1.63
281 TestPause/serial/Pause 6.15
296 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.45
305 TestStartStop/group/old-k8s-version/serial/Pause 8.29
309 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.74
314 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.63
321 TestStartStop/group/no-preload/serial/Pause 8.7
327 TestStartStop/group/embed-certs/serial/Pause 6.06
331 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 3.06
334 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.31
343 TestStartStop/group/newest-cni/serial/Pause 7.51
348 TestStartStop/group/default-k8s-diff-port/serial/Pause 7.08
TestAddons/serial/Volcano (0.79s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-421494 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-421494 addons disable volcano --alsologtostderr -v=1: exit status 11 (792.573637ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1013 21:01:49.569033   11081 out.go:360] Setting OutFile to fd 1 ...
	I1013 21:01:49.570506   11081 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:01:49.570549   11081 out.go:374] Setting ErrFile to fd 2...
	I1013 21:01:49.570570   11081 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:01:49.570868   11081 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-2495/.minikube/bin
	I1013 21:01:49.571178   11081 mustload.go:65] Loading cluster: addons-421494
	I1013 21:01:49.571588   11081 config.go:182] Loaded profile config "addons-421494": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:01:49.571629   11081 addons.go:606] checking whether the cluster is paused
	I1013 21:01:49.571759   11081 config.go:182] Loaded profile config "addons-421494": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:01:49.571914   11081 host.go:66] Checking if "addons-421494" exists ...
	I1013 21:01:49.572404   11081 cli_runner.go:164] Run: docker container inspect addons-421494 --format={{.State.Status}}
	I1013 21:01:49.612422   11081 ssh_runner.go:195] Run: systemctl --version
	I1013 21:01:49.612479   11081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-421494
	I1013 21:01:49.632386   11081 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/addons-421494/id_rsa Username:docker}
	I1013 21:01:49.734238   11081 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 21:01:49.734370   11081 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 21:01:49.764705   11081 cri.go:89] found id: "ebb997c7d79d583428b1355de2046886a326ae3c3e20f70bbe1ae0f9e6703f7f"
	I1013 21:01:49.764741   11081 cri.go:89] found id: "0c7386ac64481c921e58e89cc194fda8203de7ead964013cac7400057edd284b"
	I1013 21:01:49.764747   11081 cri.go:89] found id: "313e4250764b6b9ca250b085946faed4744d1dc6516ddd2c7da718d0652717f3"
	I1013 21:01:49.764751   11081 cri.go:89] found id: "4666db466f3f81ef2ee14c4d6b8e30164f55fc982ba337c36b3afda038eb1963"
	I1013 21:01:49.764754   11081 cri.go:89] found id: "26d51c5b89e1953bc71b605eafac087308c86278491ae0cac3f48f2a37104464"
	I1013 21:01:49.764758   11081 cri.go:89] found id: "181410ca5fe49d39df6815ad7e630167615f63fc2ac2ea29759d347e79cf62cb"
	I1013 21:01:49.764761   11081 cri.go:89] found id: "38812285e7c22cd5e7853a2d043e5969e7e9e46a305880956354a8717537af6b"
	I1013 21:01:49.764764   11081 cri.go:89] found id: "18ed9fad968275ac6372a8352ae2ebed473b24a41d01a533887860e8f4567b60"
	I1013 21:01:49.764767   11081 cri.go:89] found id: "9ba5c620ce249031919bf6e32638f8dc691e315b015d289385f70209d9e74ffd"
	I1013 21:01:49.764774   11081 cri.go:89] found id: "d970bcf470d76f206503134fe466e51e7976bbfa6b4a2e3fe3625f80149dfc31"
	I1013 21:01:49.764777   11081 cri.go:89] found id: "ba960f407a05af3ebfd3183ad26dc7b344cd8382fa28adf90d68c4f98db5420a"
	I1013 21:01:49.764780   11081 cri.go:89] found id: "d94a39038ca9352797188400141068b55b9094aa8d0f51d361b0e2d6590817cb"
	I1013 21:01:49.764783   11081 cri.go:89] found id: "5842ed1dd0727229088f521445604b2c1a71d16ca6035743c549feb0f0139a21"
	I1013 21:01:49.764786   11081 cri.go:89] found id: "fa6943addc3e3fa4467c9e16c42f411b5ae91ed87ff413a74875379b524422bf"
	I1013 21:01:49.764793   11081 cri.go:89] found id: "af2a904ce7f6b934880d46e7cf2b5afbeb8c28d04de3438f7f4ce62dc8173941"
	I1013 21:01:49.764805   11081 cri.go:89] found id: "24771ab281e111109e2945e2f4112c4fb92daca6a6eb93304fbc65748bee14e7"
	I1013 21:01:49.764812   11081 cri.go:89] found id: "99d43d6662679bda28129b2b95cba7f724d95042cc2bd6f3c957a1e2ba16b5d8"
	I1013 21:01:49.764816   11081 cri.go:89] found id: "056c2dbfb314d220189391293f077c99025846b4b9f34abe273b798f61317570"
	I1013 21:01:49.764819   11081 cri.go:89] found id: "99ba07ab68f8f8928330f6da7154c1fa9e2a9c5025906f287d474fc44f71bcd3"
	I1013 21:01:49.764822   11081 cri.go:89] found id: "b69697c681afb5d9720692f76f4c7fb1f08cbb53f5d5c7219d2ecab1e81e51ad"
	I1013 21:01:49.764827   11081 cri.go:89] found id: "3cef779926c40efd23a69e4a3f37a0bddcdaf08e16cbb526f1d13502db7a95a1"
	I1013 21:01:49.764830   11081 cri.go:89] found id: "65658d48b6c6a4b767ffad937ef4f74467a3d49eb81ae57813596987defa754b"
	I1013 21:01:49.764833   11081 cri.go:89] found id: "7eaba707b03a138f0291c2f0905cf2c10e0c0e7a7d56a206cb7266035a7280bb"
	I1013 21:01:49.764836   11081 cri.go:89] found id: ""
	I1013 21:01:49.764892   11081 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 21:01:49.784684   11081 out.go:203] 
	W1013 21:01:49.788419   11081 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T21:01:49Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T21:01:49Z" level=error msg="open /run/runc: no such file or directory"
	
	W1013 21:01:49.788442   11081 out.go:285] * 
	* 
	W1013 21:01:50.278409   11081 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1013 21:01:50.282327   11081 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-arm64 -p addons-421494 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.79s)
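Note: most of the addon failures in this run share the root cause visible above. "minikube addons disable" first checks whether the cluster is paused, and that check shells out to "sudo runc list -f json"; on this crio kicbase image the command exits 1 because /run/runc does not exist, so the disable calls abort with MK_ADDON_DISABLE_PAUSED. A minimal reproduction sketch, assuming the addons-421494 profile is still running (both commands are taken from the code path logged above):

    # paused-state probe that fails: "open /run/runc: no such file or directory"
    out/minikube-linux-arm64 -p addons-421494 ssh "sudo runc list -f json"

    # crio-native container listing used earlier in the same check, which succeeds in the log above
    out/minikube-linux-arm64 -p addons-421494 ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"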

TestAddons/parallel/Registry (14.04s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 9.908157ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-66898fdd98-5nbln" [41505187-6ea8-4010-80bf-50e2d38aa5e0] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003032304s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-nfn7w" [2d213357-a5ce-4cbc-bcde-d13049d2406e] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003325772s
addons_test.go:392: (dbg) Run:  kubectl --context addons-421494 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-421494 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-421494 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.533867829s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-421494 ip
2025/10/13 21:02:14 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-421494 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-421494 addons disable registry --alsologtostderr -v=1: exit status 11 (250.884143ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1013 21:02:14.753167   11608 out.go:360] Setting OutFile to fd 1 ...
	I1013 21:02:14.753414   11608 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:02:14.753429   11608 out.go:374] Setting ErrFile to fd 2...
	I1013 21:02:14.753437   11608 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:02:14.753712   11608 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-2495/.minikube/bin
	I1013 21:02:14.753980   11608 mustload.go:65] Loading cluster: addons-421494
	I1013 21:02:14.754329   11608 config.go:182] Loaded profile config "addons-421494": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:02:14.754347   11608 addons.go:606] checking whether the cluster is paused
	I1013 21:02:14.754453   11608 config.go:182] Loaded profile config "addons-421494": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:02:14.754472   11608 host.go:66] Checking if "addons-421494" exists ...
	I1013 21:02:14.754920   11608 cli_runner.go:164] Run: docker container inspect addons-421494 --format={{.State.Status}}
	I1013 21:02:14.771917   11608 ssh_runner.go:195] Run: systemctl --version
	I1013 21:02:14.771970   11608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-421494
	I1013 21:02:14.791373   11608 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/addons-421494/id_rsa Username:docker}
	I1013 21:02:14.895527   11608 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 21:02:14.895627   11608 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 21:02:14.924162   11608 cri.go:89] found id: "ebb997c7d79d583428b1355de2046886a326ae3c3e20f70bbe1ae0f9e6703f7f"
	I1013 21:02:14.924183   11608 cri.go:89] found id: "0c7386ac64481c921e58e89cc194fda8203de7ead964013cac7400057edd284b"
	I1013 21:02:14.924188   11608 cri.go:89] found id: "313e4250764b6b9ca250b085946faed4744d1dc6516ddd2c7da718d0652717f3"
	I1013 21:02:14.924192   11608 cri.go:89] found id: "4666db466f3f81ef2ee14c4d6b8e30164f55fc982ba337c36b3afda038eb1963"
	I1013 21:02:14.924195   11608 cri.go:89] found id: "26d51c5b89e1953bc71b605eafac087308c86278491ae0cac3f48f2a37104464"
	I1013 21:02:14.924199   11608 cri.go:89] found id: "181410ca5fe49d39df6815ad7e630167615f63fc2ac2ea29759d347e79cf62cb"
	I1013 21:02:14.924203   11608 cri.go:89] found id: "38812285e7c22cd5e7853a2d043e5969e7e9e46a305880956354a8717537af6b"
	I1013 21:02:14.924206   11608 cri.go:89] found id: "18ed9fad968275ac6372a8352ae2ebed473b24a41d01a533887860e8f4567b60"
	I1013 21:02:14.924210   11608 cri.go:89] found id: "9ba5c620ce249031919bf6e32638f8dc691e315b015d289385f70209d9e74ffd"
	I1013 21:02:14.924219   11608 cri.go:89] found id: "d970bcf470d76f206503134fe466e51e7976bbfa6b4a2e3fe3625f80149dfc31"
	I1013 21:02:14.924224   11608 cri.go:89] found id: "ba960f407a05af3ebfd3183ad26dc7b344cd8382fa28adf90d68c4f98db5420a"
	I1013 21:02:14.924228   11608 cri.go:89] found id: "d94a39038ca9352797188400141068b55b9094aa8d0f51d361b0e2d6590817cb"
	I1013 21:02:14.924232   11608 cri.go:89] found id: "5842ed1dd0727229088f521445604b2c1a71d16ca6035743c549feb0f0139a21"
	I1013 21:02:14.924236   11608 cri.go:89] found id: "fa6943addc3e3fa4467c9e16c42f411b5ae91ed87ff413a74875379b524422bf"
	I1013 21:02:14.924244   11608 cri.go:89] found id: "af2a904ce7f6b934880d46e7cf2b5afbeb8c28d04de3438f7f4ce62dc8173941"
	I1013 21:02:14.924254   11608 cri.go:89] found id: "24771ab281e111109e2945e2f4112c4fb92daca6a6eb93304fbc65748bee14e7"
	I1013 21:02:14.924261   11608 cri.go:89] found id: "99d43d6662679bda28129b2b95cba7f724d95042cc2bd6f3c957a1e2ba16b5d8"
	I1013 21:02:14.924265   11608 cri.go:89] found id: "056c2dbfb314d220189391293f077c99025846b4b9f34abe273b798f61317570"
	I1013 21:02:14.924269   11608 cri.go:89] found id: "99ba07ab68f8f8928330f6da7154c1fa9e2a9c5025906f287d474fc44f71bcd3"
	I1013 21:02:14.924272   11608 cri.go:89] found id: "b69697c681afb5d9720692f76f4c7fb1f08cbb53f5d5c7219d2ecab1e81e51ad"
	I1013 21:02:14.924277   11608 cri.go:89] found id: "3cef779926c40efd23a69e4a3f37a0bddcdaf08e16cbb526f1d13502db7a95a1"
	I1013 21:02:14.924280   11608 cri.go:89] found id: "65658d48b6c6a4b767ffad937ef4f74467a3d49eb81ae57813596987defa754b"
	I1013 21:02:14.924283   11608 cri.go:89] found id: "7eaba707b03a138f0291c2f0905cf2c10e0c0e7a7d56a206cb7266035a7280bb"
	I1013 21:02:14.924286   11608 cri.go:89] found id: ""
	I1013 21:02:14.924334   11608 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 21:02:14.938790   11608 out.go:203] 
	W1013 21:02:14.941612   11608 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T21:02:14Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T21:02:14Z" level=error msg="open /run/runc: no such file or directory"
	
	W1013 21:02:14.941646   11608 out.go:285] * 
	* 
	W1013 21:02:14.946306   11608 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1013 21:02:14.949239   11608 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-arm64 -p addons-421494 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (14.04s)
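Note: the registry itself looked healthy in this run — the in-cluster wget probe completed and the test went on to the node-IP GET at http://192.168.49.2:5000 — and only the shared addons-disable teardown failed, as in the Volcano test above. A minimal sketch of the two reachability checks for manual triage, assuming the profile and the registry addon are still up (the kubectl command is the one the test runs; the curl line is a hypothetical host-side equivalent of the DEBUG GET logged above):

    kubectl --context addons-421494 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
    curl -sS http://192.168.49.2:5000/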

TestAddons/parallel/RegistryCreds (0.53s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.129303ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-421494
addons_test.go:332: (dbg) Run:  kubectl --context addons-421494 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-421494 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-421494 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (265.06462ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1013 21:03:01.921157   13609 out.go:360] Setting OutFile to fd 1 ...
	I1013 21:03:01.921357   13609 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:03:01.921374   13609 out.go:374] Setting ErrFile to fd 2...
	I1013 21:03:01.921380   13609 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:03:01.921718   13609 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-2495/.minikube/bin
	I1013 21:03:01.922032   13609 mustload.go:65] Loading cluster: addons-421494
	I1013 21:03:01.922473   13609 config.go:182] Loaded profile config "addons-421494": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:03:01.922494   13609 addons.go:606] checking whether the cluster is paused
	I1013 21:03:01.922634   13609 config.go:182] Loaded profile config "addons-421494": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:03:01.922653   13609 host.go:66] Checking if "addons-421494" exists ...
	I1013 21:03:01.923245   13609 cli_runner.go:164] Run: docker container inspect addons-421494 --format={{.State.Status}}
	I1013 21:03:01.941288   13609 ssh_runner.go:195] Run: systemctl --version
	I1013 21:03:01.941439   13609 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-421494
	I1013 21:03:01.961406   13609 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/addons-421494/id_rsa Username:docker}
	I1013 21:03:02.066530   13609 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 21:03:02.066628   13609 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 21:03:02.098996   13609 cri.go:89] found id: "ebb997c7d79d583428b1355de2046886a326ae3c3e20f70bbe1ae0f9e6703f7f"
	I1013 21:03:02.099014   13609 cri.go:89] found id: "0c7386ac64481c921e58e89cc194fda8203de7ead964013cac7400057edd284b"
	I1013 21:03:02.099019   13609 cri.go:89] found id: "313e4250764b6b9ca250b085946faed4744d1dc6516ddd2c7da718d0652717f3"
	I1013 21:03:02.099023   13609 cri.go:89] found id: "4666db466f3f81ef2ee14c4d6b8e30164f55fc982ba337c36b3afda038eb1963"
	I1013 21:03:02.099027   13609 cri.go:89] found id: "26d51c5b89e1953bc71b605eafac087308c86278491ae0cac3f48f2a37104464"
	I1013 21:03:02.099030   13609 cri.go:89] found id: "181410ca5fe49d39df6815ad7e630167615f63fc2ac2ea29759d347e79cf62cb"
	I1013 21:03:02.099033   13609 cri.go:89] found id: "38812285e7c22cd5e7853a2d043e5969e7e9e46a305880956354a8717537af6b"
	I1013 21:03:02.099037   13609 cri.go:89] found id: "18ed9fad968275ac6372a8352ae2ebed473b24a41d01a533887860e8f4567b60"
	I1013 21:03:02.099042   13609 cri.go:89] found id: "9ba5c620ce249031919bf6e32638f8dc691e315b015d289385f70209d9e74ffd"
	I1013 21:03:02.099055   13609 cri.go:89] found id: "d970bcf470d76f206503134fe466e51e7976bbfa6b4a2e3fe3625f80149dfc31"
	I1013 21:03:02.099059   13609 cri.go:89] found id: "ba960f407a05af3ebfd3183ad26dc7b344cd8382fa28adf90d68c4f98db5420a"
	I1013 21:03:02.099063   13609 cri.go:89] found id: "d94a39038ca9352797188400141068b55b9094aa8d0f51d361b0e2d6590817cb"
	I1013 21:03:02.099066   13609 cri.go:89] found id: "5842ed1dd0727229088f521445604b2c1a71d16ca6035743c549feb0f0139a21"
	I1013 21:03:02.099070   13609 cri.go:89] found id: "fa6943addc3e3fa4467c9e16c42f411b5ae91ed87ff413a74875379b524422bf"
	I1013 21:03:02.099073   13609 cri.go:89] found id: "af2a904ce7f6b934880d46e7cf2b5afbeb8c28d04de3438f7f4ce62dc8173941"
	I1013 21:03:02.099078   13609 cri.go:89] found id: "24771ab281e111109e2945e2f4112c4fb92daca6a6eb93304fbc65748bee14e7"
	I1013 21:03:02.099081   13609 cri.go:89] found id: "99d43d6662679bda28129b2b95cba7f724d95042cc2bd6f3c957a1e2ba16b5d8"
	I1013 21:03:02.099092   13609 cri.go:89] found id: "056c2dbfb314d220189391293f077c99025846b4b9f34abe273b798f61317570"
	I1013 21:03:02.099095   13609 cri.go:89] found id: "99ba07ab68f8f8928330f6da7154c1fa9e2a9c5025906f287d474fc44f71bcd3"
	I1013 21:03:02.099099   13609 cri.go:89] found id: "b69697c681afb5d9720692f76f4c7fb1f08cbb53f5d5c7219d2ecab1e81e51ad"
	I1013 21:03:02.099104   13609 cri.go:89] found id: "3cef779926c40efd23a69e4a3f37a0bddcdaf08e16cbb526f1d13502db7a95a1"
	I1013 21:03:02.099107   13609 cri.go:89] found id: "65658d48b6c6a4b767ffad937ef4f74467a3d49eb81ae57813596987defa754b"
	I1013 21:03:02.099110   13609 cri.go:89] found id: "7eaba707b03a138f0291c2f0905cf2c10e0c0e7a7d56a206cb7266035a7280bb"
	I1013 21:03:02.099113   13609 cri.go:89] found id: ""
	I1013 21:03:02.099164   13609 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 21:03:02.114385   13609 out.go:203] 
	W1013 21:03:02.117164   13609 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T21:03:02Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T21:03:02Z" level=error msg="open /run/runc: no such file or directory"
	
	W1013 21:03:02.117188   13609 out.go:285] * 
	* 
	W1013 21:03:02.121873   13609 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1013 21:03:02.124746   13609 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-arm64 -p addons-421494 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.53s)

TestAddons/parallel/Ingress (147.27s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-421494 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-421494 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-421494 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [deabfb39-677e-447b-9ac6-b418f9050311] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [deabfb39-677e-447b-9ac6-b418f9050311] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.003698654s
I1013 21:02:49.302404    4299 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-421494 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-421494 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.182077045s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-421494 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-421494 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
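Note: the decisive failure here is the in-VM curl a few lines up — the request to http://127.0.0.1/ with the Host: nginx.example.com header never returned within the ssh window, and status 28 is consistent with curl's operation-timeout exit code being propagated through ssh. A minimal sketch for retrying the probe by hand with an explicit client-side timeout, assuming the nginx pod and ingress are still deployed (the -m 10 bound is an illustrative addition, not part of the original test):

    out/minikube-linux-arm64 -p addons-421494 ssh "curl -s -m 10 -H 'Host: nginx.example.com' http://127.0.0.1/"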
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-421494
helpers_test.go:243: (dbg) docker inspect addons-421494:

-- stdout --
	[
	    {
	        "Id": "1c1825622e98f9a9fb3c72e6860c723048ac7a6e801dcc40c454272c1bcfd512",
	        "Created": "2025-10-13T20:59:17.041522545Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 5459,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-13T20:59:17.112130573Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/1c1825622e98f9a9fb3c72e6860c723048ac7a6e801dcc40c454272c1bcfd512/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1c1825622e98f9a9fb3c72e6860c723048ac7a6e801dcc40c454272c1bcfd512/hostname",
	        "HostsPath": "/var/lib/docker/containers/1c1825622e98f9a9fb3c72e6860c723048ac7a6e801dcc40c454272c1bcfd512/hosts",
	        "LogPath": "/var/lib/docker/containers/1c1825622e98f9a9fb3c72e6860c723048ac7a6e801dcc40c454272c1bcfd512/1c1825622e98f9a9fb3c72e6860c723048ac7a6e801dcc40c454272c1bcfd512-json.log",
	        "Name": "/addons-421494",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-421494:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-421494",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1c1825622e98f9a9fb3c72e6860c723048ac7a6e801dcc40c454272c1bcfd512",
	                "LowerDir": "/var/lib/docker/overlay2/25abef24ec30fd29758dd2d6150d3c107a3ce08958a2d71d9122456d332c01d3-init/diff:/var/lib/docker/overlay2/160e637be34680c755ffc6109c686a7fe5c4dfcba06c9274b3806724dc064518/diff",
	                "MergedDir": "/var/lib/docker/overlay2/25abef24ec30fd29758dd2d6150d3c107a3ce08958a2d71d9122456d332c01d3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/25abef24ec30fd29758dd2d6150d3c107a3ce08958a2d71d9122456d332c01d3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/25abef24ec30fd29758dd2d6150d3c107a3ce08958a2d71d9122456d332c01d3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-421494",
	                "Source": "/var/lib/docker/volumes/addons-421494/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-421494",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-421494",
	                "name.minikube.sigs.k8s.io": "addons-421494",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "15ae7685133aacb8b7f906637e00bcddee85eb0d94e5046fe4cc0f0bdbe1664f",
	            "SandboxKey": "/var/run/docker/netns/15ae7685133a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-421494": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:59:19:4d:f8:05",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "41efece0a838293653cc76ebcfe24b4727fd0e7cae57be2a13c239908efd9641",
	                    "EndpointID": "9db857c7e9e2557894fe6175472e8a9f13efed151d2398df7331def191faecaa",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-421494",
	                        "1c1825622e98"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
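Note: the full docker inspect dump above is what the post-mortem helper captures verbatim; for manual triage the same data can be narrowed with Go-template queries. A minimal sketch assuming the addons-421494 container still exists — the first two templates mirror ones already visible in the cli_runner log lines, while the third (node IP from the profile network) is an illustrative addition:

    docker container inspect addons-421494 --format '{{.State.Status}}'
    docker container inspect addons-421494 --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'
    docker container inspect addons-421494 --format '{{(index .NetworkSettings.Networks "addons-421494").IPAddress}}'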
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-421494 -n addons-421494
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-421494 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-421494 logs -n 25: (1.490698443s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-docker-875751                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-875751 │ jenkins │ v1.37.0 │ 13 Oct 25 20:58 UTC │ 13 Oct 25 20:58 UTC │
	│ start   │ --download-only -p binary-mirror-313294 --alsologtostderr --binary-mirror http://127.0.0.1:46681 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-313294   │ jenkins │ v1.37.0 │ 13 Oct 25 20:58 UTC │                     │
	│ delete  │ -p binary-mirror-313294                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-313294   │ jenkins │ v1.37.0 │ 13 Oct 25 20:58 UTC │ 13 Oct 25 20:58 UTC │
	│ addons  │ disable dashboard -p addons-421494                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-421494          │ jenkins │ v1.37.0 │ 13 Oct 25 20:58 UTC │                     │
	│ addons  │ enable dashboard -p addons-421494                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-421494          │ jenkins │ v1.37.0 │ 13 Oct 25 20:58 UTC │                     │
	│ start   │ -p addons-421494 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-421494          │ jenkins │ v1.37.0 │ 13 Oct 25 20:58 UTC │ 13 Oct 25 21:01 UTC │
	│ addons  │ addons-421494 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-421494          │ jenkins │ v1.37.0 │ 13 Oct 25 21:01 UTC │                     │
	│ addons  │ addons-421494 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-421494          │ jenkins │ v1.37.0 │ 13 Oct 25 21:02 UTC │                     │
	│ addons  │ addons-421494 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-421494          │ jenkins │ v1.37.0 │ 13 Oct 25 21:02 UTC │                     │
	│ addons  │ addons-421494 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-421494          │ jenkins │ v1.37.0 │ 13 Oct 25 21:02 UTC │                     │
	│ ip      │ addons-421494 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-421494          │ jenkins │ v1.37.0 │ 13 Oct 25 21:02 UTC │ 13 Oct 25 21:02 UTC │
	│ addons  │ addons-421494 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-421494          │ jenkins │ v1.37.0 │ 13 Oct 25 21:02 UTC │                     │
	│ ssh     │ addons-421494 ssh cat /opt/local-path-provisioner/pvc-c6a933ea-f6fd-4814-a9cc-d489c3561930_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-421494          │ jenkins │ v1.37.0 │ 13 Oct 25 21:02 UTC │ 13 Oct 25 21:02 UTC │
	│ addons  │ addons-421494 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-421494          │ jenkins │ v1.37.0 │ 13 Oct 25 21:02 UTC │                     │
	│ addons  │ enable headlamp -p addons-421494 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-421494          │ jenkins │ v1.37.0 │ 13 Oct 25 21:02 UTC │                     │
	│ addons  │ addons-421494 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-421494          │ jenkins │ v1.37.0 │ 13 Oct 25 21:02 UTC │                     │
	│ addons  │ addons-421494 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-421494          │ jenkins │ v1.37.0 │ 13 Oct 25 21:02 UTC │                     │
	│ addons  │ addons-421494 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-421494          │ jenkins │ v1.37.0 │ 13 Oct 25 21:02 UTC │                     │
	│ addons  │ addons-421494 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-421494          │ jenkins │ v1.37.0 │ 13 Oct 25 21:02 UTC │                     │
	│ ssh     │ addons-421494 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-421494          │ jenkins │ v1.37.0 │ 13 Oct 25 21:02 UTC │                     │
	│ addons  │ addons-421494 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-421494          │ jenkins │ v1.37.0 │ 13 Oct 25 21:03 UTC │                     │
	│ addons  │ addons-421494 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-421494          │ jenkins │ v1.37.0 │ 13 Oct 25 21:03 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-421494                                                                                                                                                                                                                                                                                                                                                                                           │ addons-421494          │ jenkins │ v1.37.0 │ 13 Oct 25 21:03 UTC │ 13 Oct 25 21:03 UTC │
	│ addons  │ addons-421494 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-421494          │ jenkins │ v1.37.0 │ 13 Oct 25 21:03 UTC │                     │
	│ ip      │ addons-421494 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-421494          │ jenkins │ v1.37.0 │ 13 Oct 25 21:04 UTC │ 13 Oct 25 21:05 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
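Note: the addon operations in the table above are plain invocations of the minikube binary under test. A minimal Go sketch of driving it the same way the harness does (binary path and profile name are taken from this report; the helper name is hypothetical):

package main

import (
	"fmt"
	"os/exec"
)

// runMinikube shells out to the minikube binary under test and returns
// combined stdout/stderr, roughly what the integration harness does.
func runMinikube(args ...string) (string, error) {
	cmd := exec.Command("out/minikube-linux-arm64", args...)
	out, err := cmd.CombinedOutput()
	return string(out), err
}

func main() {
	out, err := runMinikube("-p", "addons-421494", "addons", "disable", "headlamp", "--alsologtostderr", "-v=1")
	fmt.Print(out)
	if err != nil {
		// non-zero exits surface here, as in the failing tests listed above
		fmt.Println("exit error:", err)
	}
}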
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 20:58:51
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 20:58:51.347613    5057 out.go:360] Setting OutFile to fd 1 ...
	I1013 20:58:51.347825    5057 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 20:58:51.347852    5057 out.go:374] Setting ErrFile to fd 2...
	I1013 20:58:51.347873    5057 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 20:58:51.348172    5057 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-2495/.minikube/bin
	I1013 20:58:51.348666    5057 out.go:368] Setting JSON to false
	I1013 20:58:51.349449    5057 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":2466,"bootTime":1760386666,"procs":143,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1013 20:58:51.349541    5057 start.go:141] virtualization:  
	I1013 20:58:51.352988    5057 out.go:179] * [addons-421494] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1013 20:58:51.355985    5057 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 20:58:51.356032    5057 notify.go:220] Checking for updates...
	I1013 20:58:51.361661    5057 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 20:58:51.364593    5057 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-2495/kubeconfig
	I1013 20:58:51.367332    5057 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-2495/.minikube
	I1013 20:58:51.370444    5057 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1013 20:58:51.373239    5057 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 20:58:51.376213    5057 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 20:58:51.396575    5057 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1013 20:58:51.396692    5057 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 20:58:51.458133    5057 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-10-13 20:58:51.448947061 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 20:58:51.458238    5057 docker.go:318] overlay module found
	I1013 20:58:51.461220    5057 out.go:179] * Using the docker driver based on user configuration
	I1013 20:58:51.464063    5057 start.go:305] selected driver: docker
	I1013 20:58:51.464092    5057 start.go:925] validating driver "docker" against <nil>
	I1013 20:58:51.464105    5057 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 20:58:51.464834    5057 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 20:58:51.517901    5057 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-10-13 20:58:51.508772532 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 20:58:51.518071    5057 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1013 20:58:51.518297    5057 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 20:58:51.521233    5057 out.go:179] * Using Docker driver with root privileges
	I1013 20:58:51.524077    5057 cni.go:84] Creating CNI manager for ""
	I1013 20:58:51.524147    5057 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 20:58:51.524157    5057 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1013 20:58:51.524237    5057 start.go:349] cluster config:
	{Name:addons-421494 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-421494 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1013 20:58:51.527362    5057 out.go:179] * Starting "addons-421494" primary control-plane node in "addons-421494" cluster
	I1013 20:58:51.530197    5057 cache.go:123] Beginning downloading kic base image for docker with crio
	I1013 20:58:51.533121    5057 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1013 20:58:51.536082    5057 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1013 20:58:51.536159    5057 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 20:58:51.536195    5057 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-2495/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1013 20:58:51.536207    5057 cache.go:58] Caching tarball of preloaded images
	I1013 20:58:51.536285    5057 preload.go:233] Found /home/jenkins/minikube-integration/21724-2495/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1013 20:58:51.536299    5057 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1013 20:58:51.536658    5057 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/config.json ...
	I1013 20:58:51.536684    5057 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/config.json: {Name:mk2741074136a1d96fd52bb31764367dd6839187 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 20:58:51.553011    5057 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 to local cache
	I1013 20:58:51.553149    5057 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local cache directory
	I1013 20:58:51.553172    5057 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local cache directory, skipping pull
	I1013 20:58:51.553177    5057 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in cache, skipping pull
	I1013 20:58:51.553185    5057 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 as a tarball
	I1013 20:58:51.553194    5057 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 from local cache
	I1013 20:59:09.227967    5057 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 from cached tarball
	I1013 20:59:09.228009    5057 cache.go:232] Successfully downloaded all kic artifacts
	I1013 20:59:09.228037    5057 start.go:360] acquireMachinesLock for addons-421494: {Name:mke133de16fa3a5dbff16f3894584bfb771c3296 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 20:59:09.228171    5057 start.go:364] duration metric: took 114.049µs to acquireMachinesLock for "addons-421494"
	I1013 20:59:09.228202    5057 start.go:93] Provisioning new machine with config: &{Name:addons-421494 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-421494 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 20:59:09.228275    5057 start.go:125] createHost starting for "" (driver="docker")
	I1013 20:59:09.231545    5057 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1013 20:59:09.231769    5057 start.go:159] libmachine.API.Create for "addons-421494" (driver="docker")
	I1013 20:59:09.231832    5057 client.go:168] LocalClient.Create starting
	I1013 20:59:09.231955    5057 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem
	I1013 20:59:09.597058    5057 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem
	I1013 20:59:09.963914    5057 cli_runner.go:164] Run: docker network inspect addons-421494 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1013 20:59:09.979759    5057 cli_runner.go:211] docker network inspect addons-421494 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1013 20:59:09.979861    5057 network_create.go:284] running [docker network inspect addons-421494] to gather additional debugging logs...
	I1013 20:59:09.979882    5057 cli_runner.go:164] Run: docker network inspect addons-421494
	W1013 20:59:09.994370    5057 cli_runner.go:211] docker network inspect addons-421494 returned with exit code 1
	I1013 20:59:09.994401    5057 network_create.go:287] error running [docker network inspect addons-421494]: docker network inspect addons-421494: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-421494 not found
	I1013 20:59:09.994414    5057 network_create.go:289] output of [docker network inspect addons-421494]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-421494 not found
	
	** /stderr **
	I1013 20:59:09.994507    5057 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 20:59:10.021050    5057 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40018f4770}
	I1013 20:59:10.021092    5057 network_create.go:124] attempt to create docker network addons-421494 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1013 20:59:10.021167    5057 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-421494 addons-421494
	I1013 20:59:10.086023    5057 network_create.go:108] docker network addons-421494 192.168.49.0/24 created
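Note: the network_create lines above pick a free private subnet and create a labelled bridge network with a fixed gateway and MTU. A minimal sketch of issuing the equivalent docker command from Go (function name is an assumption, not minikube's code; the flags mirror the logged command):

package main

import (
	"fmt"
	"os/exec"
)

// createClusterNetwork mirrors the `docker network create` invocation above:
// a bridge network with an explicit subnet, gateway and MTU, plus the labels
// minikube uses to find its own networks later.
func createClusterNetwork(name, subnet, gateway string, mtu int) error {
	args := []string{
		"network", "create",
		"--driver=bridge",
		"--subnet=" + subnet,
		"--gateway=" + gateway,
		"-o", "--ip-masq",
		"-o", "--icc",
		"-o", fmt.Sprintf("com.docker.network.driver.mtu=%d", mtu),
		"--label=created_by.minikube.sigs.k8s.io=true",
		"--label=name.minikube.sigs.k8s.io=" + name,
		name,
	}
	out, err := exec.Command("docker", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("docker network create: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := createClusterNetwork("addons-421494", "192.168.49.0/24", "192.168.49.1", 1500); err != nil {
		fmt.Println(err)
	}
}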
	I1013 20:59:10.086066    5057 kic.go:121] calculated static IP "192.168.49.2" for the "addons-421494" container
	I1013 20:59:10.086212    5057 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1013 20:59:10.104025    5057 cli_runner.go:164] Run: docker volume create addons-421494 --label name.minikube.sigs.k8s.io=addons-421494 --label created_by.minikube.sigs.k8s.io=true
	I1013 20:59:10.124043    5057 oci.go:103] Successfully created a docker volume addons-421494
	I1013 20:59:10.124147    5057 cli_runner.go:164] Run: docker run --rm --name addons-421494-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-421494 --entrypoint /usr/bin/test -v addons-421494:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1013 20:59:12.578924    5057 cli_runner.go:217] Completed: docker run --rm --name addons-421494-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-421494 --entrypoint /usr/bin/test -v addons-421494:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib: (2.454736981s)
	I1013 20:59:12.578955    5057 oci.go:107] Successfully prepared a docker volume addons-421494
	I1013 20:59:12.578976    5057 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 20:59:12.578994    5057 kic.go:194] Starting extracting preloaded images to volume ...
	I1013 20:59:12.579061    5057 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-2495/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-421494:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1013 20:59:16.970989    5057 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-2495/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-421494:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.391883746s)
	I1013 20:59:16.971023    5057 kic.go:203] duration metric: took 4.392025356s to extract preloaded images to volume ...
	W1013 20:59:16.971164    5057 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1013 20:59:16.971277    5057 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1013 20:59:17.026126    5057 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-421494 --name addons-421494 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-421494 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-421494 --network addons-421494 --ip 192.168.49.2 --volume addons-421494:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1013 20:59:17.361153    5057 cli_runner.go:164] Run: docker container inspect addons-421494 --format={{.State.Running}}
	I1013 20:59:17.382881    5057 cli_runner.go:164] Run: docker container inspect addons-421494 --format={{.State.Status}}
	I1013 20:59:17.407022    5057 cli_runner.go:164] Run: docker exec addons-421494 stat /var/lib/dpkg/alternatives/iptables
	I1013 20:59:17.456980    5057 oci.go:144] the created container "addons-421494" has a running status.
	I1013 20:59:17.457009    5057 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21724-2495/.minikube/machines/addons-421494/id_rsa...
	I1013 20:59:18.394964    5057 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21724-2495/.minikube/machines/addons-421494/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1013 20:59:18.414536    5057 cli_runner.go:164] Run: docker container inspect addons-421494 --format={{.State.Status}}
	I1013 20:59:18.432746    5057 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1013 20:59:18.432769    5057 kic_runner.go:114] Args: [docker exec --privileged addons-421494 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1013 20:59:18.470258    5057 cli_runner.go:164] Run: docker container inspect addons-421494 --format={{.State.Status}}
	I1013 20:59:18.488178    5057 machine.go:93] provisionDockerMachine start ...
	I1013 20:59:18.488276    5057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-421494
	I1013 20:59:18.504255    5057 main.go:141] libmachine: Using SSH client type: native
	I1013 20:59:18.504586    5057 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
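Note: the 32768 in the SSH dialer above is the ephemeral host port Docker published for the container's 22/tcp; the `docker container inspect -f` template a few lines earlier resolves it. A small sketch of the same lookup (helper name hypothetical):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshHostPort asks Docker which host port the container's 22/tcp was
// published to; in the log above addons-421494 resolves to 127.0.0.1:32768.
func sshHostPort(container string) (string, error) {
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("addons-421494")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("ssh to 127.0.0.1:" + port)
}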
	I1013 20:59:18.504603    5057 main.go:141] libmachine: About to run SSH command:
	hostname
	I1013 20:59:18.505184    5057 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1013 20:59:21.647182    5057 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-421494
	
	I1013 20:59:21.647278    5057 ubuntu.go:182] provisioning hostname "addons-421494"
	I1013 20:59:21.647360    5057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-421494
	I1013 20:59:21.664740    5057 main.go:141] libmachine: Using SSH client type: native
	I1013 20:59:21.665047    5057 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1013 20:59:21.665062    5057 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-421494 && echo "addons-421494" | sudo tee /etc/hostname
	I1013 20:59:21.816376    5057 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-421494
	
	I1013 20:59:21.816452    5057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-421494
	I1013 20:59:21.833875    5057 main.go:141] libmachine: Using SSH client type: native
	I1013 20:59:21.834182    5057 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1013 20:59:21.834211    5057 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-421494' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-421494/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-421494' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 20:59:21.975715    5057 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1013 20:59:21.975740    5057 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-2495/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-2495/.minikube}
	I1013 20:59:21.975800    5057 ubuntu.go:190] setting up certificates
	I1013 20:59:21.975812    5057 provision.go:84] configureAuth start
	I1013 20:59:21.975871    5057 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-421494
	I1013 20:59:21.992483    5057 provision.go:143] copyHostCerts
	I1013 20:59:21.992562    5057 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-2495/.minikube/cert.pem (1123 bytes)
	I1013 20:59:21.992695    5057 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-2495/.minikube/key.pem (1675 bytes)
	I1013 20:59:21.992760    5057 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-2495/.minikube/ca.pem (1082 bytes)
	I1013 20:59:21.992844    5057 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-2495/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca-key.pem org=jenkins.addons-421494 san=[127.0.0.1 192.168.49.2 addons-421494 localhost minikube]
	I1013 20:59:22.877383    5057 provision.go:177] copyRemoteCerts
	I1013 20:59:22.877450    5057 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 20:59:22.877515    5057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-421494
	I1013 20:59:22.893825    5057 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/addons-421494/id_rsa Username:docker}
	I1013 20:59:22.994770    5057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1013 20:59:23.012553    5057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1013 20:59:23.029591    5057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1013 20:59:23.045667    5057 provision.go:87] duration metric: took 1.069831765s to configureAuth
	I1013 20:59:23.045695    5057 ubuntu.go:206] setting minikube options for container-runtime
	I1013 20:59:23.045872    5057 config.go:182] Loaded profile config "addons-421494": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 20:59:23.045981    5057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-421494
	I1013 20:59:23.062516    5057 main.go:141] libmachine: Using SSH client type: native
	I1013 20:59:23.062828    5057 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1013 20:59:23.062848    5057 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1013 20:59:23.306556    5057 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1013 20:59:23.306575    5057 machine.go:96] duration metric: took 4.818377495s to provisionDockerMachine
	I1013 20:59:23.306585    5057 client.go:171] duration metric: took 14.074740166s to LocalClient.Create
	I1013 20:59:23.306597    5057 start.go:167] duration metric: took 14.074828746s to libmachine.API.Create "addons-421494"
	I1013 20:59:23.306604    5057 start.go:293] postStartSetup for "addons-421494" (driver="docker")
	I1013 20:59:23.306614    5057 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 20:59:23.306671    5057 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 20:59:23.306711    5057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-421494
	I1013 20:59:23.323696    5057 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/addons-421494/id_rsa Username:docker}
	I1013 20:59:23.423342    5057 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 20:59:23.426401    5057 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1013 20:59:23.426427    5057 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1013 20:59:23.426438    5057 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-2495/.minikube/addons for local assets ...
	I1013 20:59:23.426498    5057 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-2495/.minikube/files for local assets ...
	I1013 20:59:23.426519    5057 start.go:296] duration metric: took 119.908986ms for postStartSetup
	I1013 20:59:23.426819    5057 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-421494
	I1013 20:59:23.445621    5057 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/config.json ...
	I1013 20:59:23.445891    5057 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 20:59:23.445929    5057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-421494
	I1013 20:59:23.462794    5057 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/addons-421494/id_rsa Username:docker}
	I1013 20:59:23.560811    5057 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1013 20:59:23.565504    5057 start.go:128] duration metric: took 14.337214231s to createHost
	I1013 20:59:23.565525    5057 start.go:83] releasing machines lock for "addons-421494", held for 14.337339438s
	I1013 20:59:23.565595    5057 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-421494
	I1013 20:59:23.582524    5057 ssh_runner.go:195] Run: cat /version.json
	I1013 20:59:23.582574    5057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-421494
	I1013 20:59:23.582840    5057 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 20:59:23.582892    5057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-421494
	I1013 20:59:23.599859    5057 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/addons-421494/id_rsa Username:docker}
	I1013 20:59:23.607980    5057 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/addons-421494/id_rsa Username:docker}
	I1013 20:59:23.787558    5057 ssh_runner.go:195] Run: systemctl --version
	I1013 20:59:23.793579    5057 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1013 20:59:23.828661    5057 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 20:59:23.832708    5057 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 20:59:23.832785    5057 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 20:59:23.860045    5057 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1013 20:59:23.860111    5057 start.go:495] detecting cgroup driver to use...
	I1013 20:59:23.860149    5057 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1013 20:59:23.860199    5057 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1013 20:59:23.876954    5057 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1013 20:59:23.889208    5057 docker.go:218] disabling cri-docker service (if available) ...
	I1013 20:59:23.889270    5057 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 20:59:23.906273    5057 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 20:59:23.923570    5057 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 20:59:24.031041    5057 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 20:59:24.156867    5057 docker.go:234] disabling docker service ...
	I1013 20:59:24.156929    5057 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 20:59:24.175868    5057 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 20:59:24.188355    5057 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 20:59:24.310522    5057 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 20:59:24.424135    5057 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1013 20:59:24.436184    5057 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 20:59:24.448846    5057 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1013 20:59:24.448915    5057 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 20:59:24.456690    5057 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1013 20:59:24.456801    5057 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 20:59:24.464707    5057 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 20:59:24.472238    5057 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 20:59:24.479967    5057 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 20:59:24.487314    5057 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 20:59:24.494925    5057 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 20:59:24.506899    5057 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
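Note: the sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroupfs as the cgroup manager, conmon_cgroup, and the unprivileged-port sysctl. A hedged Go sketch of one such in-place substitution (not minikube's implementation, just the same effect as the pause_image sed):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setPauseImage rewrites the pause_image line in a CRI-O drop-in, the same
// substitution the `sudo sed -i` call above performs. Path and image default
// to the values shown in the log; error handling is minimal on purpose.
func setPauseImage(path, image string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	updated := re.ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", image)))
	return os.WriteFile(path, updated, 0o644)
}

func main() {
	if err := setPauseImage("/etc/crio/crio.conf.d/02-crio.conf", "registry.k8s.io/pause:3.10.1"); err != nil {
		fmt.Println(err)
	}
}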
	I1013 20:59:24.515690    5057 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 20:59:24.522848    5057 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1013 20:59:24.522924    5057 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1013 20:59:24.536343    5057 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1013 20:59:24.543820    5057 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 20:59:24.649781    5057 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1013 20:59:24.772857    5057 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1013 20:59:24.772936    5057 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1013 20:59:24.776417    5057 start.go:563] Will wait 60s for crictl version
	I1013 20:59:24.776475    5057 ssh_runner.go:195] Run: which crictl
	I1013 20:59:24.779592    5057 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1013 20:59:24.804884    5057 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1013 20:59:24.805012    5057 ssh_runner.go:195] Run: crio --version
	I1013 20:59:24.835696    5057 ssh_runner.go:195] Run: crio --version
	I1013 20:59:24.868481    5057 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1013 20:59:24.871218    5057 cli_runner.go:164] Run: docker network inspect addons-421494 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 20:59:24.894013    5057 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1013 20:59:24.897517    5057 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
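Note: the bash one-liner above drops any stale host.minikube.internal line from /etc/hosts and appends the gateway IP. An equivalent sketch in Go (omitting the temp-file-plus-sudo-cp step the log uses):

package main

import (
	"fmt"
	"os"
	"strings"
)

// addHostEntry reproduces the shell pipeline in the log: drop any existing
// line ending in "<TAB>name", then append "ip<TAB>name".
func addHostEntry(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // same filter as `grep -v $'\t<name>$'`
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := addHostEntry("/etc/hosts", "192.168.49.1", "host.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}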
	I1013 20:59:24.906956    5057 kubeadm.go:883] updating cluster {Name:addons-421494 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-421494 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 20:59:24.907077    5057 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 20:59:24.907135    5057 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 20:59:24.939322    5057 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 20:59:24.939343    5057 crio.go:433] Images already preloaded, skipping extraction
	I1013 20:59:24.939397    5057 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 20:59:24.963970    5057 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 20:59:24.963992    5057 cache_images.go:85] Images are preloaded, skipping loading
	I1013 20:59:24.964000    5057 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1013 20:59:24.964091    5057 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-421494 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-421494 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1013 20:59:24.964175    5057 ssh_runner.go:195] Run: crio config
	I1013 20:59:25.017391    5057 cni.go:84] Creating CNI manager for ""
	I1013 20:59:25.017416    5057 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 20:59:25.017463    5057 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1013 20:59:25.017497    5057 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-421494 NodeName:addons-421494 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 20:59:25.017719    5057 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-421494"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1013 20:59:25.017815    5057 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1013 20:59:25.025866    5057 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 20:59:25.025976    5057 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 20:59:25.033605    5057 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1013 20:59:25.046739    5057 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 20:59:25.058865    5057 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1013 20:59:25.070903    5057 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1013 20:59:25.074367    5057 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 20:59:25.083833    5057 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 20:59:25.209873    5057 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 20:59:25.224550    5057 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494 for IP: 192.168.49.2
	I1013 20:59:25.224612    5057 certs.go:195] generating shared ca certs ...
	I1013 20:59:25.224642    5057 certs.go:227] acquiring lock for ca certs: {Name:mk2386d3847709c1fe7ff4ab092e2e3fd8551167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 20:59:25.224786    5057 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-2495/.minikube/ca.key
	I1013 20:59:25.591047    5057 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-2495/.minikube/ca.crt ...
	I1013 20:59:25.591077    5057 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/ca.crt: {Name:mk8d9df9f97f37a0e7946e483b0cf0cab6dca92f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 20:59:25.591263    5057 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-2495/.minikube/ca.key ...
	I1013 20:59:25.591275    5057 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/ca.key: {Name:mk61b8d997f2b410c27c4783c8cf57f766b1ba78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 20:59:25.591367    5057 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.key
	I1013 20:59:25.927229    5057 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.crt ...
	I1013 20:59:25.927252    5057 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.crt: {Name:mk69437ac22a94b21039b7f8a2ae52550cf27a29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 20:59:25.927399    5057 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.key ...
	I1013 20:59:25.927406    5057 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.key: {Name:mk44f1041f8c214a497a0fd3fdfa68d761f9a861 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
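Note: the minikubeCA and proxyClientCA steps above each produce a self-signed CA cert/key pair under .minikube. A minimal, illustrative Go sketch of generating such a CA (parameters here are assumptions; minikube's own crypto.go differs in detail):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

// newCA writes a self-signed CA certificate and RSA key as PEM files.
func newCA(cn, certPath, keyPath string) error {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return err
	}
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: cn},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		return err
	}
	certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	if err := os.WriteFile(certPath, certPEM, 0o644); err != nil {
		return err
	}
	return os.WriteFile(keyPath, keyPEM, 0o600)
}

func main() {
	_ = newCA("minikubeCA", "ca.crt", "ca.key")
}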
	I1013 20:59:25.927469    5057 certs.go:257] generating profile certs ...
	I1013 20:59:25.927521    5057 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/client.key
	I1013 20:59:25.927533    5057 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/client.crt with IP's: []
	I1013 20:59:26.150839    5057 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/client.crt ...
	I1013 20:59:26.150870    5057 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/client.crt: {Name:mk81ace3874509099a8b83d36f63ec14297cef29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 20:59:26.151061    5057 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/client.key ...
	I1013 20:59:26.151074    5057 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/client.key: {Name:mk6dafd4878763af63ae414731b4047cb774d060 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 20:59:26.151153    5057 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/apiserver.key.c1266fec
	I1013 20:59:26.151173    5057 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/apiserver.crt.c1266fec with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1013 20:59:26.511930    5057 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/apiserver.crt.c1266fec ...
	I1013 20:59:26.511963    5057 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/apiserver.crt.c1266fec: {Name:mkc8c14d78802ad91717649469d7012488c7e448 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 20:59:26.512146    5057 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/apiserver.key.c1266fec ...
	I1013 20:59:26.512159    5057 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/apiserver.key.c1266fec: {Name:mk4321121bc9ea7dec0507db2c366785884a723d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 20:59:26.512243    5057 certs.go:382] copying /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/apiserver.crt.c1266fec -> /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/apiserver.crt
	I1013 20:59:26.512326    5057 certs.go:386] copying /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/apiserver.key.c1266fec -> /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/apiserver.key
	I1013 20:59:26.512381    5057 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/proxy-client.key
	I1013 20:59:26.512400    5057 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/proxy-client.crt with IP's: []
	I1013 20:59:26.816031    5057 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/proxy-client.crt ...
	I1013 20:59:26.816058    5057 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/proxy-client.crt: {Name:mkc55fe7b19842ba0f74e6abe8297181c9f920a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 20:59:26.816229    5057 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/proxy-client.key ...
	I1013 20:59:26.816241    5057 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/proxy-client.key: {Name:mk8fc4bb5d8f5d7f52726b9c3b46816d61ded9bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 20:59:26.816430    5057 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca-key.pem (1679 bytes)
	I1013 20:59:26.816481    5057 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem (1082 bytes)
	I1013 20:59:26.816514    5057 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem (1123 bytes)
	I1013 20:59:26.816541    5057 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/key.pem (1675 bytes)
	I1013 20:59:26.817100    5057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 20:59:26.834971    5057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1013 20:59:26.852405    5057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 20:59:26.870674    5057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1013 20:59:26.887008    5057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1013 20:59:26.903524    5057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1013 20:59:26.920034    5057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 20:59:26.936182    5057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1013 20:59:26.952111    5057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 20:59:26.968451    5057 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 20:59:26.980753    5057 ssh_runner.go:195] Run: openssl version
	I1013 20:59:26.986630    5057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 20:59:26.994893    5057 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 20:59:26.998214    5057 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 20:59 /usr/share/ca-certificates/minikubeCA.pem
	I1013 20:59:26.998275    5057 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 20:59:27.039128    5057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1013 20:59:27.047095    5057 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 20:59:27.050241    5057 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1013 20:59:27.050287    5057 kubeadm.go:400] StartCluster: {Name:addons-421494 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-421494 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 20:59:27.050365    5057 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 20:59:27.050434    5057 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 20:59:27.075355    5057 cri.go:89] found id: ""
	I1013 20:59:27.075481    5057 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1013 20:59:27.082928    5057 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1013 20:59:27.090106    5057 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1013 20:59:27.090188    5057 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1013 20:59:27.097693    5057 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1013 20:59:27.097712    5057 kubeadm.go:157] found existing configuration files:
	
	I1013 20:59:27.097761    5057 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1013 20:59:27.104892    5057 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1013 20:59:27.104977    5057 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1013 20:59:27.111718    5057 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1013 20:59:27.118842    5057 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1013 20:59:27.118904    5057 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1013 20:59:27.125611    5057 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1013 20:59:27.132489    5057 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1013 20:59:27.132546    5057 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1013 20:59:27.139068    5057 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1013 20:59:27.145877    5057 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1013 20:59:27.145934    5057 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
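	[Note] The block above is the pre-init stale-config cleanup: for each kubeconfig under /etc/kubernetes, minikube greps for the expected API endpoint and deletes the file when the endpoint is absent (here every grep exits with status 2 because the files do not exist yet, so the rm calls are no-ops). A minimal sketch of the pattern, with the endpoint taken from this log:
	  for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	    sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f" \
	      || sudo rm -f "/etc/kubernetes/$f"
	  done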
	I1013 20:59:27.152457    5057 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1013 20:59:27.192696    5057 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1013 20:59:27.192769    5057 kubeadm.go:318] [preflight] Running pre-flight checks
	I1013 20:59:27.217832    5057 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1013 20:59:27.217940    5057 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1013 20:59:27.217981    5057 kubeadm.go:318] OS: Linux
	I1013 20:59:27.218043    5057 kubeadm.go:318] CGROUPS_CPU: enabled
	I1013 20:59:27.218108    5057 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1013 20:59:27.218172    5057 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1013 20:59:27.218243    5057 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1013 20:59:27.218311    5057 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1013 20:59:27.218373    5057 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1013 20:59:27.218433    5057 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1013 20:59:27.218498    5057 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1013 20:59:27.218553    5057 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1013 20:59:27.283268    5057 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1013 20:59:27.283400    5057 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1013 20:59:27.283506    5057 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1013 20:59:27.290373    5057 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1013 20:59:27.296663    5057 out.go:252]   - Generating certificates and keys ...
	I1013 20:59:27.296779    5057 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1013 20:59:27.296862    5057 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1013 20:59:27.464961    5057 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1013 20:59:29.295969    5057 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1013 20:59:29.824044    5057 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1013 20:59:30.362277    5057 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1013 20:59:31.596921    5057 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1013 20:59:31.597213    5057 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-421494 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1013 20:59:32.927724    5057 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1013 20:59:32.928065    5057 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-421494 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1013 20:59:33.248589    5057 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1013 20:59:33.417158    5057 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1013 20:59:33.630091    5057 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1013 20:59:33.630367    5057 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1013 20:59:34.147513    5057 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1013 20:59:34.732438    5057 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1013 20:59:35.119123    5057 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1013 20:59:35.738292    5057 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1013 20:59:36.033106    5057 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1013 20:59:36.033744    5057 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1013 20:59:36.036529    5057 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1013 20:59:36.040176    5057 out.go:252]   - Booting up control plane ...
	I1013 20:59:36.040298    5057 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1013 20:59:36.040386    5057 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1013 20:59:36.040462    5057 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1013 20:59:36.056488    5057 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1013 20:59:36.056621    5057 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1013 20:59:36.064086    5057 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1013 20:59:36.070062    5057 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1013 20:59:36.070501    5057 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1013 20:59:36.200083    5057 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1013 20:59:36.200208    5057 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1013 20:59:37.201418    5057 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001652111s
	I1013 20:59:37.204814    5057 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1013 20:59:37.204911    5057 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1013 20:59:37.205240    5057 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1013 20:59:37.205338    5057 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1013 20:59:41.036576    5057 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.831345254s
	I1013 20:59:43.060978    5057 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 5.856081814s
	I1013 20:59:43.706461    5057 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.501519283s
	I1013 20:59:43.728815    5057 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1013 20:59:43.741178    5057 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1013 20:59:43.756276    5057 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1013 20:59:43.756499    5057 kubeadm.go:318] [mark-control-plane] Marking the node addons-421494 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1013 20:59:43.768562    5057 kubeadm.go:318] [bootstrap-token] Using token: b7wyk6.mqwpjdody0hqmiej
	I1013 20:59:43.771511    5057 out.go:252]   - Configuring RBAC rules ...
	I1013 20:59:43.771639    5057 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1013 20:59:43.775924    5057 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1013 20:59:43.786549    5057 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1013 20:59:43.795358    5057 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1013 20:59:43.799205    5057 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1013 20:59:43.803827    5057 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1013 20:59:44.114737    5057 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1013 20:59:44.550248    5057 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1013 20:59:45.132476    5057 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1013 20:59:45.132528    5057 kubeadm.go:318] 
	I1013 20:59:45.132639    5057 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1013 20:59:45.132669    5057 kubeadm.go:318] 
	I1013 20:59:45.132754    5057 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1013 20:59:45.132759    5057 kubeadm.go:318] 
	I1013 20:59:45.132791    5057 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1013 20:59:45.132857    5057 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1013 20:59:45.132911    5057 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1013 20:59:45.132916    5057 kubeadm.go:318] 
	I1013 20:59:45.132981    5057 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1013 20:59:45.132987    5057 kubeadm.go:318] 
	I1013 20:59:45.133045    5057 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1013 20:59:45.133052    5057 kubeadm.go:318] 
	I1013 20:59:45.133107    5057 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1013 20:59:45.133185    5057 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1013 20:59:45.133257    5057 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1013 20:59:45.133262    5057 kubeadm.go:318] 
	I1013 20:59:45.133350    5057 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1013 20:59:45.133431    5057 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1013 20:59:45.133437    5057 kubeadm.go:318] 
	I1013 20:59:45.133526    5057 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token b7wyk6.mqwpjdody0hqmiej \
	I1013 20:59:45.133634    5057 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:8246abc30e5ad4f73e0c5665e9f98b5f472397f3707b313971a0405cce9e4e60 \
	I1013 20:59:45.133656    5057 kubeadm.go:318] 	--control-plane 
	I1013 20:59:45.133662    5057 kubeadm.go:318] 
	I1013 20:59:45.133751    5057 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1013 20:59:45.133756    5057 kubeadm.go:318] 
	I1013 20:59:45.133842    5057 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token b7wyk6.mqwpjdody0hqmiej \
	I1013 20:59:45.133949    5057 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:8246abc30e5ad4f73e0c5665e9f98b5f472397f3707b313971a0405cce9e4e60 
	I1013 20:59:45.154864    5057 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1013 20:59:45.156095    5057 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1013 20:59:45.156251    5057 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
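	[Note] kubeadm prints the join commands together with a discovery-token CA cert hash. If that hash ever needs to be recomputed, the standard kubeadm recipe derives it from the cluster CA (the path below assumes kubeadm's default PKI layout; in this minikube cluster the CA lives under /var/lib/minikube/certs):
	  openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	    | openssl rsa -pubin -outform der 2>/dev/null \
	    | openssl dgst -sha256 -hex | sed 's/^.* //'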
	I1013 20:59:45.156280    5057 cni.go:84] Creating CNI manager for ""
	I1013 20:59:45.156289    5057 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 20:59:45.164740    5057 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1013 20:59:45.170362    5057 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1013 20:59:45.183872    5057 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1013 20:59:45.183894    5057 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1013 20:59:45.205217    5057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
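	[Note] With the docker driver and the crio runtime, minikube picks kindnet as the CNI (cni.go:143 above) and applies its manifest with the bundled kubectl. Once applied, the DaemonSet can be checked directly; a sketch assuming kindnet's usual app=kindnet label:
	  kubectl -n kube-system get pods -l app=kindnet -o wide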
	I1013 20:59:45.611860    5057 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1013 20:59:45.611989    5057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 20:59:45.612052    5057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-421494 minikube.k8s.io/updated_at=2025_10_13T20_59_45_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6d66ff63385795e7745a92b3d96cb54f5b977801 minikube.k8s.io/name=addons-421494 minikube.k8s.io/primary=true
	I1013 20:59:45.741785    5057 ops.go:34] apiserver oom_adj: -16
	I1013 20:59:45.741884    5057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 20:59:46.242149    5057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 20:59:46.742694    5057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 20:59:47.242153    5057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 20:59:47.742096    5057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 20:59:48.242904    5057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 20:59:48.742152    5057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 20:59:49.242078    5057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 20:59:49.335227    5057 kubeadm.go:1113] duration metric: took 3.723284316s to wait for elevateKubeSystemPrivileges
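	[Note] The repeated "get sa default" calls above are a wait loop: the default ServiceAccount only appears once the controller-manager's service-account controller is running, so minikube polls roughly every 500ms until the get succeeds before declaring elevateKubeSystemPrivileges complete. A shell equivalent of that wait, using the same kubeconfig as the log:
	  until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	    sleep 0.5
	  done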
	I1013 20:59:49.335257    5057 kubeadm.go:402] duration metric: took 22.284972143s to StartCluster
	I1013 20:59:49.335280    5057 settings.go:142] acquiring lock: {Name:mk4a4b065845724eb9b4bb1832a39a02e57dd066 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 20:59:49.335401    5057 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-2495/kubeconfig
	I1013 20:59:49.335745    5057 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/kubeconfig: {Name:mke211bdcf461494c5205a242d94f4d44bbce10e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 20:59:49.335954    5057 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 20:59:49.336084    5057 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1013 20:59:49.336298    5057 config.go:182] Loaded profile config "addons-421494": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 20:59:49.336337    5057 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1013 20:59:49.336418    5057 addons.go:69] Setting yakd=true in profile "addons-421494"
	I1013 20:59:49.336436    5057 addons.go:238] Setting addon yakd=true in "addons-421494"
	I1013 20:59:49.336463    5057 host.go:66] Checking if "addons-421494" exists ...
	I1013 20:59:49.336918    5057 cli_runner.go:164] Run: docker container inspect addons-421494 --format={{.State.Status}}
	I1013 20:59:49.337256    5057 addons.go:69] Setting inspektor-gadget=true in profile "addons-421494"
	I1013 20:59:49.337277    5057 addons.go:238] Setting addon inspektor-gadget=true in "addons-421494"
	I1013 20:59:49.337324    5057 host.go:66] Checking if "addons-421494" exists ...
	I1013 20:59:49.337749    5057 cli_runner.go:164] Run: docker container inspect addons-421494 --format={{.State.Status}}
	I1013 20:59:49.339582    5057 addons.go:69] Setting metrics-server=true in profile "addons-421494"
	I1013 20:59:49.339614    5057 addons.go:238] Setting addon metrics-server=true in "addons-421494"
	I1013 20:59:49.339638    5057 host.go:66] Checking if "addons-421494" exists ...
	I1013 20:59:49.340084    5057 cli_runner.go:164] Run: docker container inspect addons-421494 --format={{.State.Status}}
	I1013 20:59:49.341538    5057 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-421494"
	I1013 20:59:49.341573    5057 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-421494"
	I1013 20:59:49.341611    5057 host.go:66] Checking if "addons-421494" exists ...
	I1013 20:59:49.342038    5057 cli_runner.go:164] Run: docker container inspect addons-421494 --format={{.State.Status}}
	I1013 20:59:49.352650    5057 addons.go:69] Setting cloud-spanner=true in profile "addons-421494"
	I1013 20:59:49.352676    5057 addons.go:69] Setting registry=true in profile "addons-421494"
	I1013 20:59:49.352697    5057 addons.go:238] Setting addon registry=true in "addons-421494"
	I1013 20:59:49.352706    5057 addons.go:238] Setting addon cloud-spanner=true in "addons-421494"
	I1013 20:59:49.352733    5057 host.go:66] Checking if "addons-421494" exists ...
	I1013 20:59:49.352744    5057 host.go:66] Checking if "addons-421494" exists ...
	I1013 20:59:49.353196    5057 cli_runner.go:164] Run: docker container inspect addons-421494 --format={{.State.Status}}
	I1013 20:59:49.353233    5057 cli_runner.go:164] Run: docker container inspect addons-421494 --format={{.State.Status}}
	I1013 20:59:49.367811    5057 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-421494"
	I1013 20:59:49.367975    5057 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-421494"
	I1013 20:59:49.368033    5057 host.go:66] Checking if "addons-421494" exists ...
	I1013 20:59:49.368788    5057 cli_runner.go:164] Run: docker container inspect addons-421494 --format={{.State.Status}}
	I1013 20:59:49.369050    5057 addons.go:69] Setting registry-creds=true in profile "addons-421494"
	I1013 20:59:49.369115    5057 addons.go:238] Setting addon registry-creds=true in "addons-421494"
	I1013 20:59:49.369190    5057 host.go:66] Checking if "addons-421494" exists ...
	I1013 20:59:49.369856    5057 cli_runner.go:164] Run: docker container inspect addons-421494 --format={{.State.Status}}
	I1013 20:59:49.372247    5057 addons.go:69] Setting storage-provisioner=true in profile "addons-421494"
	I1013 20:59:49.372312    5057 addons.go:238] Setting addon storage-provisioner=true in "addons-421494"
	I1013 20:59:49.372352    5057 host.go:66] Checking if "addons-421494" exists ...
	I1013 20:59:49.372920    5057 cli_runner.go:164] Run: docker container inspect addons-421494 --format={{.State.Status}}
	I1013 20:59:49.378775    5057 addons.go:69] Setting default-storageclass=true in profile "addons-421494"
	I1013 20:59:49.378864    5057 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-421494"
	I1013 20:59:49.379508    5057 cli_runner.go:164] Run: docker container inspect addons-421494 --format={{.State.Status}}
	I1013 20:59:49.398716    5057 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-421494"
	I1013 20:59:49.398757    5057 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-421494"
	I1013 20:59:49.399713    5057 cli_runner.go:164] Run: docker container inspect addons-421494 --format={{.State.Status}}
	I1013 20:59:49.399750    5057 addons.go:69] Setting volcano=true in profile "addons-421494"
	I1013 20:59:49.399770    5057 addons.go:238] Setting addon volcano=true in "addons-421494"
	I1013 20:59:49.399821    5057 host.go:66] Checking if "addons-421494" exists ...
	I1013 20:59:49.400250    5057 cli_runner.go:164] Run: docker container inspect addons-421494 --format={{.State.Status}}
	I1013 20:59:49.404020    5057 addons.go:69] Setting gcp-auth=true in profile "addons-421494"
	I1013 20:59:49.404052    5057 mustload.go:65] Loading cluster: addons-421494
	I1013 20:59:49.404246    5057 config.go:182] Loaded profile config "addons-421494": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 20:59:49.404493    5057 cli_runner.go:164] Run: docker container inspect addons-421494 --format={{.State.Status}}
	I1013 20:59:49.426549    5057 addons.go:69] Setting ingress=true in profile "addons-421494"
	I1013 20:59:49.426582    5057 addons.go:238] Setting addon ingress=true in "addons-421494"
	I1013 20:59:49.426628    5057 host.go:66] Checking if "addons-421494" exists ...
	I1013 20:59:49.427113    5057 cli_runner.go:164] Run: docker container inspect addons-421494 --format={{.State.Status}}
	I1013 20:59:49.428349    5057 addons.go:69] Setting volumesnapshots=true in profile "addons-421494"
	I1013 20:59:49.428381    5057 addons.go:238] Setting addon volumesnapshots=true in "addons-421494"
	I1013 20:59:49.428411    5057 host.go:66] Checking if "addons-421494" exists ...
	I1013 20:59:49.428854    5057 cli_runner.go:164] Run: docker container inspect addons-421494 --format={{.State.Status}}
	I1013 20:59:49.447179    5057 addons.go:69] Setting ingress-dns=true in profile "addons-421494"
	I1013 20:59:49.447211    5057 addons.go:238] Setting addon ingress-dns=true in "addons-421494"
	I1013 20:59:49.447329    5057 host.go:66] Checking if "addons-421494" exists ...
	I1013 20:59:49.447914    5057 out.go:179] * Verifying Kubernetes components...
	I1013 20:59:49.352650    5057 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-421494"
	I1013 20:59:49.447990    5057 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-421494"
	I1013 20:59:49.448011    5057 host.go:66] Checking if "addons-421494" exists ...
	I1013 20:59:49.448373    5057 cli_runner.go:164] Run: docker container inspect addons-421494 --format={{.State.Status}}
	I1013 20:59:49.447923    5057 cli_runner.go:164] Run: docker container inspect addons-421494 --format={{.State.Status}}
	I1013 20:59:49.467316    5057 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1013 20:59:49.470186    5057 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1013 20:59:49.470209    5057 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1013 20:59:49.470270    5057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-421494
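	[Note] The docker container inspect template above is how minikube looks up the host port Docker published for the container's SSH port (22/tcp); the sshutil lines further down then dial 127.0.0.1 on that port (32768 in this run). Run standalone it looks like:
	  docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-421494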
	I1013 20:59:49.486094    5057 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 20:59:49.486271    5057 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1013 20:59:49.493607    5057 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1013 20:59:49.493880    5057 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1013 20:59:49.496288    5057 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1013 20:59:49.498649    5057 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1013 20:59:49.498667    5057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1013 20:59:49.498720    5057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-421494
	I1013 20:59:49.504261    5057 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1013 20:59:49.504290    5057 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1013 20:59:49.504347    5057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-421494
	I1013 20:59:49.507511    5057 addons.go:238] Setting addon default-storageclass=true in "addons-421494"
	I1013 20:59:49.507625    5057 host.go:66] Checking if "addons-421494" exists ...
	I1013 20:59:49.509011    5057 cli_runner.go:164] Run: docker container inspect addons-421494 --format={{.State.Status}}
	I1013 20:59:49.511662    5057 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 20:59:49.511687    5057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1013 20:59:49.511762    5057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-421494
	I1013 20:59:49.496479    5057 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1013 20:59:49.539553    5057 out.go:179]   - Using image docker.io/registry:3.0.0
	I1013 20:59:49.539880    5057 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1013 20:59:49.545581    5057 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1013 20:59:49.545604    5057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1013 20:59:49.545669    5057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-421494
	I1013 20:59:49.545876    5057 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1013 20:59:49.545898    5057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1013 20:59:49.545970    5057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-421494
	I1013 20:59:49.496484    5057 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1013 20:59:49.496522    5057 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1013 20:59:49.559734    5057 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1013 20:59:49.559860    5057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-421494
	I1013 20:59:49.563866    5057 host.go:66] Checking if "addons-421494" exists ...
	I1013 20:59:49.599920    5057 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1013 20:59:49.603969    5057 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1013 20:59:49.604036    5057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1013 20:59:49.604123    5057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-421494
	I1013 20:59:49.614717    5057 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1013 20:59:49.620424    5057 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1013 20:59:49.623305    5057 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1013 20:59:49.628726    5057 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1013 20:59:49.633068    5057 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1013 20:59:49.639123    5057 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	W1013 20:59:49.642155    5057 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1013 20:59:49.654003    5057 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1013 20:59:49.656677    5057 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1013 20:59:49.656705    5057 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1013 20:59:49.656775    5057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-421494
	I1013 20:59:49.684153    5057 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1013 20:59:49.690189    5057 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I1013 20:59:49.701558    5057 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1013 20:59:49.704446    5057 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1013 20:59:49.704467    5057 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1013 20:59:49.704550    5057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-421494
	I1013 20:59:49.732581    5057 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1013 20:59:49.732734    5057 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1013 20:59:49.735760    5057 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1013 20:59:49.736039    5057 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1013 20:59:49.736070    5057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1013 20:59:49.736172    5057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-421494
	I1013 20:59:49.756466    5057 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1013 20:59:49.756539    5057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1013 20:59:49.756617    5057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-421494
	I1013 20:59:49.765618    5057 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/addons-421494/id_rsa Username:docker}
	I1013 20:59:49.766274    5057 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1013 20:59:49.766291    5057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1013 20:59:49.766349    5057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-421494
	I1013 20:59:49.779119    5057 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1013 20:59:49.779146    5057 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1013 20:59:49.779201    5057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-421494
	I1013 20:59:49.789353    5057 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/addons-421494/id_rsa Username:docker}
	I1013 20:59:49.798740    5057 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/addons-421494/id_rsa Username:docker}
	I1013 20:59:49.799479    5057 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/addons-421494/id_rsa Username:docker}
	I1013 20:59:49.799871    5057 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/addons-421494/id_rsa Username:docker}
	I1013 20:59:49.825995    5057 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/addons-421494/id_rsa Username:docker}
	I1013 20:59:49.827577    5057 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-421494"
	I1013 20:59:49.827615    5057 host.go:66] Checking if "addons-421494" exists ...
	I1013 20:59:49.828144    5057 cli_runner.go:164] Run: docker container inspect addons-421494 --format={{.State.Status}}
	I1013 20:59:49.841736    5057 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
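	[Note] The pipeline above rewrites the CoreDNS Corefile: it fetches the coredns ConfigMap, uses sed to insert a hosts block ahead of the forward plugin and a log directive ahead of errors, and pushes the result back with kubectl replace. Reconstructed from the sed expressions (not copied from the cluster), the edited Corefile fragment looks roughly like:
	  .:53 {
	      log
	      errors
	      ...
	      hosts {
	         192.168.49.1 host.minikube.internal
	         fallthrough
	      }
	      forward . /etc/resolv.conf
	      ...
	  }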
	I1013 20:59:49.894299    5057 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/addons-421494/id_rsa Username:docker}
	I1013 20:59:49.907307    5057 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/addons-421494/id_rsa Username:docker}
	I1013 20:59:49.927640    5057 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/addons-421494/id_rsa Username:docker}
	I1013 20:59:49.931896    5057 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/addons-421494/id_rsa Username:docker}
	I1013 20:59:49.936246    5057 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/addons-421494/id_rsa Username:docker}
	I1013 20:59:49.958617    5057 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/addons-421494/id_rsa Username:docker}
	I1013 20:59:49.964895    5057 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/addons-421494/id_rsa Username:docker}
	I1013 20:59:49.972401    5057 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/addons-421494/id_rsa Username:docker}
	W1013 20:59:49.980586    5057 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1013 20:59:49.980642    5057 retry.go:31] will retry after 359.598785ms: ssh: handshake failed: EOF
	I1013 20:59:49.999581    5057 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1013 20:59:50.002782    5057 out.go:179]   - Using image docker.io/busybox:stable
	I1013 20:59:50.003402    5057 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 20:59:50.006016    5057 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1013 20:59:50.006098    5057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1013 20:59:50.006189    5057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-421494
	I1013 20:59:50.039903    5057 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/addons-421494/id_rsa Username:docker}
	I1013 20:59:50.503477    5057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1013 20:59:50.615732    5057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 20:59:50.687352    5057 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1013 20:59:50.687372    5057 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1013 20:59:50.706176    5057 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1013 20:59:50.706254    5057 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1013 20:59:50.731826    5057 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1013 20:59:50.731896    5057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1013 20:59:50.742869    5057 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1013 20:59:50.742948    5057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1013 20:59:50.752677    5057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1013 20:59:50.789619    5057 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1013 20:59:50.789689    5057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1013 20:59:50.825217    5057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1013 20:59:50.835742    5057 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1013 20:59:50.835922    5057 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1013 20:59:50.851258    5057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 20:59:50.882536    5057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1013 20:59:50.885854    5057 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1013 20:59:50.885929    5057 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1013 20:59:50.891680    5057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1013 20:59:50.953987    5057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1013 20:59:50.965194    5057 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1013 20:59:50.965267    5057 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1013 20:59:50.972975    5057 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1013 20:59:50.973042    5057 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1013 20:59:50.997042    5057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1013 20:59:51.045664    5057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1013 20:59:51.051166    5057 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1013 20:59:51.051245    5057 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1013 20:59:51.070211    5057 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1013 20:59:51.070287    5057 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1013 20:59:51.102107    5057 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1013 20:59:51.102181    5057 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1013 20:59:51.250186    5057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1013 20:59:51.273234    5057 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1013 20:59:51.273301    5057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1013 20:59:51.321439    5057 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1013 20:59:51.321516    5057 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1013 20:59:51.360897    5057 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1013 20:59:51.360975    5057 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1013 20:59:51.402728    5057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1013 20:59:51.427986    5057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1013 20:59:51.490165    5057 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1013 20:59:51.490191    5057 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1013 20:59:51.562434    5057 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1013 20:59:51.562459    5057 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1013 20:59:51.780433    5057 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1013 20:59:51.780505    5057 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1013 20:59:51.881984    5057 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1013 20:59:51.882063    5057 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1013 20:59:52.063526    5057 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1013 20:59:52.063601    5057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1013 20:59:52.149040    5057 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.307260035s)
	I1013 20:59:52.149120    5057 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1013 20:59:52.149597    5057 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.146136986s)
	I1013 20:59:52.150224    5057 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.646658653s)
	I1013 20:59:52.151353    5057 node_ready.go:35] waiting up to 6m0s for node "addons-421494" to be "Ready" ...
	I1013 20:59:52.215766    5057 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1013 20:59:52.215849    5057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1013 20:59:52.358054    5057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1013 20:59:52.395361    5057 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1013 20:59:52.395439    5057 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1013 20:59:52.564835    5057 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1013 20:59:52.564903    5057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1013 20:59:52.655735    5057 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-421494" context rescaled to 1 replicas
	I1013 20:59:52.777918    5057 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1013 20:59:52.777986    5057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1013 20:59:52.935474    5057 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1013 20:59:52.935550    5057 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1013 20:59:53.136334    5057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1013 20:59:53.882662    5057 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.129907574s)
	I1013 20:59:53.882785    5057 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.057506904s)
	I1013 20:59:53.882826    5057 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.267068811s)
	I1013 20:59:54.236870    5057 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (3.385536165s)
	W1013 20:59:54.236966    5057 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 20:59:54.237002    5057 retry.go:31] will retry after 300.493465ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 20:59:54.236939    5057 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.35421062s)
	W1013 20:59:54.289675    5057 node_ready.go:57] node "addons-421494" has "Ready":"False" status (will retry)
	I1013 20:59:54.538071    5057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 20:59:55.747693    5057 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.750579999s)
	I1013 20:59:55.747824    5057 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.702083772s)
	I1013 20:59:55.748068    5057 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.497814125s)
	I1013 20:59:55.748253    5057 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.345414679s)
	I1013 20:59:55.748299    5057 addons.go:479] Verifying addon metrics-server=true in "addons-421494"
	I1013 20:59:55.748355    5057 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.320299617s)
	I1013 20:59:55.748570    5057 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.856822066s)
	I1013 20:59:55.748616    5057 addons.go:479] Verifying addon ingress=true in "addons-421494"
	I1013 20:59:55.747637    5057 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.793573239s)
	I1013 20:59:55.748714    5057 addons.go:479] Verifying addon registry=true in "addons-421494"
	I1013 20:59:55.751982    5057 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-421494 service yakd-dashboard -n yakd-dashboard
	
	I1013 20:59:55.752104    5057 out.go:179] * Verifying ingress addon...
	I1013 20:59:55.752164    5057 out.go:179] * Verifying registry addon...
	I1013 20:59:55.757176    5057 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1013 20:59:55.757242    5057 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1013 20:59:55.837413    5057 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1013 20:59:55.837433    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 20:59:55.837636    5057 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1013 20:59:55.837643    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1013 20:59:55.910348    5057 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1013 20:59:56.050793    5057 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.692641596s)
	W1013 20:59:56.050895    5057 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1013 20:59:56.050940    5057 retry.go:31] will retry after 313.050139ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1013 20:59:56.264264    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 20:59:56.264723    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 20:59:56.364992    5057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1013 20:59:56.428797    5057 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.292353629s)
	I1013 20:59:56.428882    5057 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-421494"
	I1013 20:59:56.429178    5057 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.890945312s)
	W1013 20:59:56.429229    5057 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 20:59:56.429285    5057 retry.go:31] will retry after 493.833301ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 20:59:56.431891    5057 out.go:179] * Verifying csi-hostpath-driver addon...
	I1013 20:59:56.435954    5057 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1013 20:59:56.446089    5057 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1013 20:59:56.446171    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1013 20:59:56.654207    5057 node_ready.go:57] node "addons-421494" has "Ready":"False" status (will retry)
	I1013 20:59:56.760906    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 20:59:56.761352    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 20:59:56.923806    5057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 20:59:56.940220    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 20:59:57.174860    5057 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1013 20:59:57.174946    5057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-421494
	I1013 20:59:57.221519    5057 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/addons-421494/id_rsa Username:docker}
	I1013 20:59:57.266196    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 20:59:57.266420    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 20:59:57.356222    5057 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1013 20:59:57.371917    5057 addons.go:238] Setting addon gcp-auth=true in "addons-421494"
	I1013 20:59:57.371964    5057 host.go:66] Checking if "addons-421494" exists ...
	I1013 20:59:57.372406    5057 cli_runner.go:164] Run: docker container inspect addons-421494 --format={{.State.Status}}
	I1013 20:59:57.401392    5057 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1013 20:59:57.401456    5057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-421494
	I1013 20:59:57.431981    5057 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/addons-421494/id_rsa Username:docker}
	I1013 20:59:57.439911    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 20:59:57.761537    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 20:59:57.761922    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 20:59:57.939851    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 20:59:58.260827    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 20:59:58.261069    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 20:59:58.439007    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1013 20:59:58.654997    5057 node_ready.go:57] node "addons-421494" has "Ready":"False" status (will retry)
	I1013 20:59:58.763229    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 20:59:58.763386    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 20:59:58.939498    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 20:59:59.043421    5057 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.678336488s)
	I1013 20:59:59.043617    5057 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.119770306s)
	W1013 20:59:59.043657    5057 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 20:59:59.043692    5057 retry.go:31] will retry after 825.929978ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 20:59:59.043758    5057 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.642344797s)
	I1013 20:59:59.046966    5057 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1013 20:59:59.049723    5057 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1013 20:59:59.052505    5057 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1013 20:59:59.052529    5057 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1013 20:59:59.065182    5057 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1013 20:59:59.065205    5057 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1013 20:59:59.077376    5057 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1013 20:59:59.077397    5057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1013 20:59:59.090300    5057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1013 20:59:59.264975    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 20:59:59.265814    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 20:59:59.445420    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 20:59:59.571435    5057 addons.go:479] Verifying addon gcp-auth=true in "addons-421494"
	I1013 20:59:59.576301    5057 out.go:179] * Verifying gcp-auth addon...
	I1013 20:59:59.579977    5057 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1013 20:59:59.585481    5057 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1013 20:59:59.585506    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 20:59:59.760633    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 20:59:59.760975    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 20:59:59.870361    5057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 20:59:59.940200    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:00.107483    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:00.287795    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:00.299725    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:00.464917    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:00.599927    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1013 21:00:00.656235    5057 node_ready.go:57] node "addons-421494" has "Ready":"False" status (will retry)
	I1013 21:00:00.769962    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:00.774520    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:00.954313    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:01.094964    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:01.262964    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:01.266216    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:01.482571    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:01.598343    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:01.689391    5057 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.818980689s)
	W1013 21:00:01.689499    5057 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:00:01.689551    5057 retry.go:31] will retry after 532.966944ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:00:01.789336    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:01.789714    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:01.944371    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:02.084731    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:02.223405    5057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 21:00:02.262787    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:02.263439    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:02.440196    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:02.583583    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:02.762130    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:02.763166    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:02.940220    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1013 21:00:03.083991    5057 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:00:03.084020    5057 retry.go:31] will retry after 1.670594067s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:00:03.086323    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1013 21:00:03.155048    5057 node_ready.go:57] node "addons-421494" has "Ready":"False" status (will retry)
	I1013 21:00:03.261458    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:03.261597    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:03.439612    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:03.583558    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:03.761860    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:03.762390    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:03.939654    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:04.083669    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:04.260967    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:04.261159    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:04.439593    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:04.583622    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:04.754852    5057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 21:00:04.765913    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:04.766641    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:04.939993    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:05.083633    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1013 21:00:05.155202    5057 node_ready.go:57] node "addons-421494" has "Ready":"False" status (will retry)
	I1013 21:00:05.262206    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:05.269742    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:05.440345    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:05.583137    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1013 21:00:05.626999    5057 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:00:05.627069    5057 retry.go:31] will retry after 2.48892018s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:00:05.761529    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:05.761779    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:05.939386    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:06.083233    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:06.260659    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:06.261593    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:06.438838    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:06.582569    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:06.761526    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:06.761711    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:06.939912    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:07.083592    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:07.261621    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:07.261750    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:07.439858    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:07.583473    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1013 21:00:07.654173    5057 node_ready.go:57] node "addons-421494" has "Ready":"False" status (will retry)
	I1013 21:00:07.760379    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:07.760572    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:07.939553    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:08.083424    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:08.116580    5057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 21:00:08.262759    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:08.263295    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:08.439875    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:08.583310    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:08.761518    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:08.761995    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1013 21:00:08.928518    5057 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:00:08.928551    5057 retry.go:31] will retry after 2.44024344s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:00:08.939471    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:09.083260    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:09.262453    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:09.263120    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:09.439210    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:09.584653    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1013 21:00:09.654524    5057 node_ready.go:57] node "addons-421494" has "Ready":"False" status (will retry)
	I1013 21:00:09.761141    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:09.761450    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:09.939118    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:10.083112    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:10.260738    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:10.260949    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:10.439132    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:10.583395    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:10.761676    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:10.761894    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:10.940025    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:11.084208    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:11.261334    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:11.261631    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:11.368986    5057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 21:00:11.440622    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:11.584199    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1013 21:00:11.655417    5057 node_ready.go:57] node "addons-421494" has "Ready":"False" status (will retry)
	I1013 21:00:11.761382    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:11.761744    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:11.939790    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:12.083417    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1013 21:00:12.185065    5057 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:00:12.185147    5057 retry.go:31] will retry after 5.307813202s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:00:12.261267    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:12.261447    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:12.440088    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:12.582888    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:12.760666    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:12.760960    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:12.938735    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:13.083254    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:13.261119    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:13.261397    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:13.439391    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:13.583337    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:13.761640    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:13.761691    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:13.939495    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:14.083861    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1013 21:00:14.154335    5057 node_ready.go:57] node "addons-421494" has "Ready":"False" status (will retry)
	I1013 21:00:14.260741    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:14.261335    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:14.439822    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:14.582684    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:14.760485    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:14.760942    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:14.939818    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:15.083175    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:15.261715    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:15.261809    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:15.440164    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:15.583140    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:15.761236    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:15.761424    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:15.939415    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:16.083371    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:16.261277    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:16.261410    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:16.440055    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:16.582740    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1013 21:00:16.654386    5057 node_ready.go:57] node "addons-421494" has "Ready":"False" status (will retry)
	I1013 21:00:16.760580    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:16.760731    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:16.939436    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:17.083412    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:17.260905    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:17.261038    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:17.438908    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:17.494012    5057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 21:00:17.583286    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:17.762394    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:17.762959    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:17.939906    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:18.083979    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:18.262133    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:18.262645    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1013 21:00:18.316027    5057 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:00:18.316057    5057 retry.go:31] will retry after 6.056174972s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:00:18.439232    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:18.583064    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1013 21:00:18.654972    5057 node_ready.go:57] node "addons-421494" has "Ready":"False" status (will retry)
	I1013 21:00:18.760953    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:18.761270    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:18.940120    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:19.083173    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:19.263620    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:19.263889    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:19.439597    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:19.583850    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:19.760746    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:19.761116    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:19.939182    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:20.084095    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:20.261396    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:20.261825    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:20.439477    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:20.583210    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:20.761080    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:20.762206    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:20.939089    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:21.083109    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1013 21:00:21.154829    5057 node_ready.go:57] node "addons-421494" has "Ready":"False" status (will retry)
	I1013 21:00:21.261505    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:21.261959    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:21.438965    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:21.583048    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:21.760801    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:21.760918    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:21.939423    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:22.083030    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:22.261028    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:22.261203    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:22.439324    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:22.583224    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:22.761195    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:22.761337    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:22.939330    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:23.083151    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:23.260707    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:23.261206    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:23.439812    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:23.583447    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1013 21:00:23.653874    5057 node_ready.go:57] node "addons-421494" has "Ready":"False" status (will retry)
	I1013 21:00:23.761416    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:23.761980    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:23.939086    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:24.083089    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:24.261506    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:24.261738    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:24.372531    5057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 21:00:24.439514    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:24.583954    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:24.763239    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:24.763560    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:24.939106    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:25.084215    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1013 21:00:25.218565    5057 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:00:25.218593    5057 retry.go:31] will retry after 6.386486728s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:00:25.260833    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:25.261722    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:25.439654    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:25.583736    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1013 21:00:25.654334    5057 node_ready.go:57] node "addons-421494" has "Ready":"False" status (will retry)
	I1013 21:00:25.760842    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:25.761014    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:25.938942    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:26.082895    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:26.260755    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:26.260849    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:26.438797    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:26.583735    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:26.760503    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:26.760846    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:26.938729    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:27.083922    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:27.261279    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:27.261860    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:27.438846    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:27.583646    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1013 21:00:27.654702    5057 node_ready.go:57] node "addons-421494" has "Ready":"False" status (will retry)
	I1013 21:00:27.760933    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:27.761361    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:27.939321    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:28.083455    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:28.260962    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:28.261019    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:28.438936    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:28.583740    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:28.761251    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:28.761613    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:28.939322    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:29.083337    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:29.263313    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:29.263408    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:29.439615    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:29.582905    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:29.761185    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:29.761356    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:29.939481    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:30.084361    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1013 21:00:30.154995    5057 node_ready.go:57] node "addons-421494" has "Ready":"False" status (will retry)
	I1013 21:00:30.261567    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:30.261634    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:30.439463    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:30.583503    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:30.760042    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:30.760662    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:30.939740    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:31.083668    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:31.260528    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:31.261023    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:31.469740    5057 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1013 21:00:31.469767    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:31.587269    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:31.605554    5057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 21:00:31.745832    5057 node_ready.go:49] node "addons-421494" is "Ready"
	I1013 21:00:31.745863    5057 node_ready.go:38] duration metric: took 39.594450036s for node "addons-421494" to be "Ready" ...
	I1013 21:00:31.745885    5057 api_server.go:52] waiting for apiserver process to appear ...
	I1013 21:00:31.745942    5057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 21:00:31.781014    5057 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1013 21:00:31.781041    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:31.781392    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:31.945361    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:32.092332    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:32.262545    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:32.262688    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:32.439869    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:32.585554    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:32.764430    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:32.765310    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:32.956283    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:33.083203    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:33.261768    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:33.261866    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:33.288297    5057 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.542322546s)
	I1013 21:00:33.288328    5057 api_server.go:72] duration metric: took 43.952343109s to wait for apiserver process to appear ...
	I1013 21:00:33.288334    5057 api_server.go:88] waiting for apiserver healthz status ...
	I1013 21:00:33.288350    5057 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1013 21:00:33.289122    5057 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.683519475s)
	W1013 21:00:33.289171    5057 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:00:33.289192    5057 retry.go:31] will retry after 9.2630868s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:00:33.296625    5057 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1013 21:00:33.298198    5057 api_server.go:141] control plane version: v1.34.1
	I1013 21:00:33.298227    5057 api_server.go:131] duration metric: took 9.883222ms to wait for apiserver health ...
	I1013 21:00:33.298235    5057 system_pods.go:43] waiting for kube-system pods to appear ...
	I1013 21:00:33.302180    5057 system_pods.go:59] 19 kube-system pods found
	I1013 21:00:33.302212    5057 system_pods.go:61] "coredns-66bc5c9577-zfn57" [2a4119f9-1325-459c-b331-e9e2f946ca94] Running
	I1013 21:00:33.302222    5057 system_pods.go:61] "csi-hostpath-attacher-0" [63ba0966-f0f0-4f2e-a04f-8cc0d6e38857] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1013 21:00:33.302229    5057 system_pods.go:61] "csi-hostpath-resizer-0" [412ef547-052e-4b6a-bef2-8a89277fc6cd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1013 21:00:33.302249    5057 system_pods.go:61] "csi-hostpathplugin-c6mtm" [9179db86-4876-478d-8469-82c3b0a2b7dc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1013 21:00:33.302259    5057 system_pods.go:61] "etcd-addons-421494" [e0231175-9578-4f4b-bc9c-3219db42e926] Running
	I1013 21:00:33.302264    5057 system_pods.go:61] "kindnet-vz77r" [43fa0e44-0713-4797-b4f0-22127befb175] Running
	I1013 21:00:33.302269    5057 system_pods.go:61] "kube-apiserver-addons-421494" [6bd64ad7-7a1b-4364-a814-c958df98b58d] Running
	I1013 21:00:33.302274    5057 system_pods.go:61] "kube-controller-manager-addons-421494" [21ea2dae-cb9d-4e3d-9bd5-d8d7150998de] Running
	I1013 21:00:33.302286    5057 system_pods.go:61] "kube-ingress-dns-minikube" [f6967331-ef1c-461a-95e8-89133a75c3ee] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1013 21:00:33.302290    5057 system_pods.go:61] "kube-proxy-zrcq6" [cab0a945-0c0d-497f-8ada-c7b45dabc7fa] Running
	I1013 21:00:33.302296    5057 system_pods.go:61] "kube-scheduler-addons-421494" [77f214aa-809f-4322-8c48-b508fe196867] Running
	I1013 21:00:33.302309    5057 system_pods.go:61] "metrics-server-85b7d694d7-hrqb8" [496e3426-b9d3-4219-ba0d-ab73c596e817] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1013 21:00:33.302324    5057 system_pods.go:61] "nvidia-device-plugin-daemonset-lswkm" [09e2dc90-684c-40b7-ad9c-333959dc27fa] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1013 21:00:33.302337    5057 system_pods.go:61] "registry-66898fdd98-5nbln" [41505187-6ea8-4010-80bf-50e2d38aa5e0] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1013 21:00:33.302345    5057 system_pods.go:61] "registry-creds-764b6fb674-f5gvj" [e6126817-d300-48b3-a682-ebad0a32e077] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1013 21:00:33.302352    5057 system_pods.go:61] "registry-proxy-nfn7w" [2d213357-a5ce-4cbc-bcde-d13049d2406e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1013 21:00:33.302360    5057 system_pods.go:61] "snapshot-controller-7d9fbc56b8-6cm8c" [5211d95d-039f-4476-a2af-de0bae933a16] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1013 21:00:33.302372    5057 system_pods.go:61] "snapshot-controller-7d9fbc56b8-9phdb" [17c48797-1ee2-46d3-98a1-1b6f33762c7a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1013 21:00:33.302378    5057 system_pods.go:61] "storage-provisioner" [681cc811-8bdb-4841-b1f8-3fc44fb6b5c4] Running
	I1013 21:00:33.302393    5057 system_pods.go:74] duration metric: took 4.146533ms to wait for pod list to return data ...
	I1013 21:00:33.302406    5057 default_sa.go:34] waiting for default service account to be created ...
	I1013 21:00:33.306923    5057 default_sa.go:45] found service account: "default"
	I1013 21:00:33.306957    5057 default_sa.go:55] duration metric: took 4.545145ms for default service account to be created ...
	I1013 21:00:33.306967    5057 system_pods.go:116] waiting for k8s-apps to be running ...
	I1013 21:00:33.312303    5057 system_pods.go:86] 19 kube-system pods found
	I1013 21:00:33.312332    5057 system_pods.go:89] "coredns-66bc5c9577-zfn57" [2a4119f9-1325-459c-b331-e9e2f946ca94] Running
	I1013 21:00:33.312350    5057 system_pods.go:89] "csi-hostpath-attacher-0" [63ba0966-f0f0-4f2e-a04f-8cc0d6e38857] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1013 21:00:33.312356    5057 system_pods.go:89] "csi-hostpath-resizer-0" [412ef547-052e-4b6a-bef2-8a89277fc6cd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1013 21:00:33.312363    5057 system_pods.go:89] "csi-hostpathplugin-c6mtm" [9179db86-4876-478d-8469-82c3b0a2b7dc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1013 21:00:33.312368    5057 system_pods.go:89] "etcd-addons-421494" [e0231175-9578-4f4b-bc9c-3219db42e926] Running
	I1013 21:00:33.312374    5057 system_pods.go:89] "kindnet-vz77r" [43fa0e44-0713-4797-b4f0-22127befb175] Running
	I1013 21:00:33.312382    5057 system_pods.go:89] "kube-apiserver-addons-421494" [6bd64ad7-7a1b-4364-a814-c958df98b58d] Running
	I1013 21:00:33.312386    5057 system_pods.go:89] "kube-controller-manager-addons-421494" [21ea2dae-cb9d-4e3d-9bd5-d8d7150998de] Running
	I1013 21:00:33.312397    5057 system_pods.go:89] "kube-ingress-dns-minikube" [f6967331-ef1c-461a-95e8-89133a75c3ee] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1013 21:00:33.312402    5057 system_pods.go:89] "kube-proxy-zrcq6" [cab0a945-0c0d-497f-8ada-c7b45dabc7fa] Running
	I1013 21:00:33.312407    5057 system_pods.go:89] "kube-scheduler-addons-421494" [77f214aa-809f-4322-8c48-b508fe196867] Running
	I1013 21:00:33.312425    5057 system_pods.go:89] "metrics-server-85b7d694d7-hrqb8" [496e3426-b9d3-4219-ba0d-ab73c596e817] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1013 21:00:33.312439    5057 system_pods.go:89] "nvidia-device-plugin-daemonset-lswkm" [09e2dc90-684c-40b7-ad9c-333959dc27fa] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1013 21:00:33.312445    5057 system_pods.go:89] "registry-66898fdd98-5nbln" [41505187-6ea8-4010-80bf-50e2d38aa5e0] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1013 21:00:33.312459    5057 system_pods.go:89] "registry-creds-764b6fb674-f5gvj" [e6126817-d300-48b3-a682-ebad0a32e077] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1013 21:00:33.312464    5057 system_pods.go:89] "registry-proxy-nfn7w" [2d213357-a5ce-4cbc-bcde-d13049d2406e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1013 21:00:33.312470    5057 system_pods.go:89] "snapshot-controller-7d9fbc56b8-6cm8c" [5211d95d-039f-4476-a2af-de0bae933a16] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1013 21:00:33.312480    5057 system_pods.go:89] "snapshot-controller-7d9fbc56b8-9phdb" [17c48797-1ee2-46d3-98a1-1b6f33762c7a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1013 21:00:33.312495    5057 system_pods.go:89] "storage-provisioner" [681cc811-8bdb-4841-b1f8-3fc44fb6b5c4] Running
	I1013 21:00:33.312503    5057 system_pods.go:126] duration metric: took 5.529621ms to wait for k8s-apps to be running ...
	I1013 21:00:33.312514    5057 system_svc.go:44] waiting for kubelet service to be running ....
	I1013 21:00:33.312576    5057 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 21:00:33.325544    5057 system_svc.go:56] duration metric: took 13.021443ms WaitForService to wait for kubelet
	I1013 21:00:33.325582    5057 kubeadm.go:586] duration metric: took 43.989595404s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 21:00:33.325605    5057 node_conditions.go:102] verifying NodePressure condition ...
	I1013 21:00:33.328893    5057 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1013 21:00:33.328936    5057 node_conditions.go:123] node cpu capacity is 2
	I1013 21:00:33.328958    5057 node_conditions.go:105] duration metric: took 3.34771ms to run NodePressure ...
	I1013 21:00:33.328975    5057 start.go:241] waiting for startup goroutines ...
	I1013 21:00:33.440060    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:33.584251    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:33.761258    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:33.761411    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:33.939336    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:34.083476    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:34.261009    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:34.261206    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:34.438966    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:34.582635    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:34.760755    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:34.760813    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:34.940908    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:35.083820    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:35.263224    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:35.263398    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:35.440272    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:35.583079    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:35.762808    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:35.763469    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:35.939866    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:36.083419    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:36.262711    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:36.263198    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:36.442196    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:36.583298    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:36.762680    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:36.763143    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:36.942960    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:37.085138    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:37.263604    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:37.264056    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:37.441664    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:37.585947    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:37.767737    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:37.768149    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:37.940757    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:38.085639    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:38.264261    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:38.264534    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:38.442253    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:38.592352    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:38.762133    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:38.762568    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:38.955845    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:39.082937    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:39.262866    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:39.263276    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:39.439093    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:39.582707    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:39.761367    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:39.761739    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:39.939557    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:40.091157    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:40.269721    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:40.270996    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:40.439413    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:40.583922    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:40.763568    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:40.763718    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:40.947701    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:41.084202    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:41.260480    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:41.261961    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:41.439541    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:41.584301    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:41.761697    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:41.761845    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:41.940200    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:42.084138    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:42.262084    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:42.262335    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:42.439245    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:42.552536    5057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 21:00:42.583606    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:42.762779    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:42.762993    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:42.939770    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:43.083977    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:43.261574    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:43.262125    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:43.438962    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:43.582870    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:43.701605    5057 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.149034673s)
	W1013 21:00:43.701637    5057 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:00:43.701655    5057 retry.go:31] will retry after 11.850607072s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:00:43.761508    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:43.761742    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:43.940547    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:44.083735    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:44.261343    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:44.261460    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:44.440265    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:44.583260    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:44.761533    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:44.761685    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:44.942074    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:45.084952    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:45.267303    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:45.268631    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:45.441702    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:45.583755    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:45.762405    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:45.762490    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:45.941093    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:46.083625    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:46.262474    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:46.262644    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:46.440331    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:46.583497    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:46.762138    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:46.762684    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:46.940496    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:47.083626    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:47.262305    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:47.262695    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:47.440602    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:47.583690    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:47.762755    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:47.763506    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:47.940318    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:48.084156    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:48.261405    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:48.261808    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:48.440237    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:48.582849    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:48.761018    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:48.761143    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:48.940438    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:49.083417    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:49.270403    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:49.271169    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:49.440490    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:49.586546    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:49.765860    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:49.766301    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:49.947884    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:50.090789    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:50.261805    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:50.262060    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:50.440147    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:50.583390    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:50.762274    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:50.767429    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:50.939939    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:51.085150    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:51.261883    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:51.262355    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:51.440481    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:51.583298    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:51.761720    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:51.762804    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:51.941817    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:52.088424    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:52.261879    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:52.262223    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:52.439209    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:52.582813    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:52.761888    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:52.762019    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:52.939501    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:53.083683    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:53.261852    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:53.262022    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:53.439325    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:53.594268    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:53.762471    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:53.762637    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:53.940700    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:54.083880    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:54.264317    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:54.264434    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:54.439699    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:54.583691    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:54.761009    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:54.761623    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:54.940276    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:55.086999    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:55.262913    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:55.263375    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:55.439840    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:55.553159    5057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 21:00:55.583229    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:55.761471    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:55.761643    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:55.939775    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:56.083566    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:56.265430    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:56.265528    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:56.439250    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:56.583858    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:56.728163    5057 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.174966934s)
	W1013 21:00:56.728241    5057 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:00:56.728325    5057 retry.go:31] will retry after 44.855996818s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:00:56.761240    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:56.761766    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:56.940385    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:57.084096    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:57.260793    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:57.261711    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:57.439643    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:57.583528    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:57.764943    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:57.765195    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:57.939448    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:58.083760    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:58.262152    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:58.262261    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:58.439127    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:58.582954    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:58.761917    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:58.762001    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:58.939384    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:59.083890    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:59.264205    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:59.264331    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:59.439641    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:59.583845    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:59.762280    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:59.763003    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:59.944178    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:00.093202    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:00.339567    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:01:00.339943    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:00.442338    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:00.583523    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:00.761483    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:00.762409    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:01:00.940455    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:01.084077    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:01.272773    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:01.274519    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:01:01.440073    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:01.582997    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:01.763017    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:01.763165    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:01:01.939319    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:02.083273    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:02.261479    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:02.261665    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:01:02.439621    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:02.583549    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:02.761406    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:02.762453    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:01:02.939770    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:03.084456    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:03.262289    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:01:03.262840    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:03.441014    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:03.583038    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:03.762628    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:03.763092    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:01:03.940519    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:04.083638    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:04.262780    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:01:04.263338    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:04.439948    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:04.585604    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:04.761891    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:01:04.762426    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:04.940662    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:05.084352    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:05.263040    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:01:05.263585    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:05.440350    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:05.583269    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:05.761718    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:05.762775    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:01:05.939968    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:06.083163    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:06.260754    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:01:06.261538    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:06.440094    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:06.583481    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:06.761324    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:06.761452    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:01:06.940042    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:07.084422    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:07.262931    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:07.263246    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:01:07.444358    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:07.583727    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:07.762399    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:01:07.762593    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:07.939649    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:08.083384    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:08.261192    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:01:08.262362    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:08.439558    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:08.583288    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:08.761785    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:01:08.761969    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:08.939473    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:09.084073    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:09.265791    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:01:09.270576    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:09.440269    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:09.583310    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:09.761426    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:09.761990    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:01:09.939106    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:10.083671    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:10.261511    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:01:10.261745    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:10.448659    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:10.584097    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:10.762783    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:01:10.763114    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:10.940939    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:11.083065    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:11.261537    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:11.261712    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:01:11.439082    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:11.582795    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:11.761573    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:01:11.762761    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:11.940421    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:12.083679    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:12.262643    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:01:12.263442    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:12.440294    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:12.588883    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:12.761710    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:12.761791    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:01:12.939721    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:13.083814    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:13.264407    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:01:13.264648    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:13.439695    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:13.583742    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:13.763119    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:01:13.763282    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:13.939750    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:14.083723    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:14.261347    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:01:14.261495    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:14.439712    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:14.583902    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:14.761443    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:14.761707    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:01:14.939909    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:15.085108    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:15.262307    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:15.262471    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:01:15.439848    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:15.583304    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:15.761840    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:01:15.762111    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:15.939547    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:16.083532    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:16.261358    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:16.261548    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:01:16.440469    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:16.583463    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:16.761637    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:16.762909    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:01:16.941204    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:17.083012    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:17.261142    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:17.261315    5057 kapi.go:107] duration metric: took 1m21.504069547s to wait for kubernetes.io/minikube-addons=registry ...
	I1013 21:01:17.439565    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:17.583502    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:17.760660    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:17.939939    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:18.083878    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:18.262355    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:18.439773    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:18.583899    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:18.761450    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:18.940386    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:19.083541    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:19.265562    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:19.439755    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:19.582963    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:19.761533    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:19.940139    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:20.083720    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:20.261114    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:20.439837    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:20.584286    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:20.760577    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:20.940327    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:21.083306    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:21.260790    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:21.440002    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:21.583058    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:21.761602    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:21.940400    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:22.083669    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:22.261142    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:22.441965    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:22.582972    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:22.761681    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:22.938891    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:23.082902    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:23.262575    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:23.440846    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:23.583805    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:23.760880    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:23.940802    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:24.084053    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:24.261262    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:24.440573    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:24.584981    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:24.761150    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:24.940210    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:25.083941    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:25.261050    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:25.439665    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:25.583936    5057 kapi.go:107] duration metric: took 1m26.003958301s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1013 21:01:25.586965    5057 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-421494 cluster.
	I1013 21:01:25.589996    5057 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1013 21:01:25.592908    5057 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1013 21:01:25.761596    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:25.939709    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:26.261220    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:26.439368    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:26.761723    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:26.940096    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:27.260248    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:27.439630    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:27.760888    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:27.938921    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:28.259940    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:28.439288    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:28.760204    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:28.939212    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:29.262774    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:29.439945    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:29.760759    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:29.939702    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:30.260779    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:30.438923    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:30.760729    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:30.940387    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:31.261491    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:31.440320    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:31.761041    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:31.939249    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:32.260170    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:32.439892    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:32.760363    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:32.940125    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:33.260863    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:33.438865    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:33.761238    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:33.959209    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:34.266138    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:34.439909    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:34.761570    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:34.940331    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:35.260738    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:35.442743    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:35.761271    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:35.940056    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:36.260337    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:36.440506    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:36.760325    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:36.940183    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:37.267571    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:37.440289    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:37.760952    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:37.939800    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:38.261613    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:38.440861    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:38.761379    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:38.940289    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:39.263188    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:39.439238    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:39.761835    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:39.938793    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:40.261193    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:40.438872    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:40.761578    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:40.939900    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:41.263412    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:41.440242    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:41.584524    5057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 21:01:41.762393    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:41.940429    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:42.262242    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:42.440169    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:42.760231    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:42.855620    5057 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.271060813s)
	W1013 21:01:42.855657    5057 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1013 21:01:42.855735    5057 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
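	The repeated validation failure above means the ig-crd.yaml manifest being applied is missing the top-level apiVersion and kind fields that every Kubernetes object must declare, so kubectl rejects it before it ever reaches the API server. A minimal sketch of the kind of header such a CRD manifest would need (the group, names and version below are illustrative assumptions, not taken from the actual inspektor-gadget manifest):
	
	  # Every object kubectl applies must start with apiVersion and kind.
	  apiVersion: apiextensions.k8s.io/v1
	  kind: CustomResourceDefinition
	  metadata:
	    name: traces.gadget.example.io     # assumed name, for illustration only
	  spec:
	    group: gadget.example.io           # assumed group, for illustration only
	    scope: Namespaced
	    names: {plural: traces, singular: trace, kind: Trace}
	    versions:
	      - name: v1alpha1
	        served: true
	        storage: true
	        schema: {openAPIV3Schema: {type: object}}
	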
	I1013 21:01:42.939970    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:43.262083    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:43.450621    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:43.761974    5057 kapi.go:107] duration metric: took 1m48.004797279s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1013 21:01:43.940517    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:44.440295    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:44.940391    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:45.441364    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:45.940275    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:46.439453    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:46.942852    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:47.440025    5057 kapi.go:107] duration metric: took 1m51.004076946s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1013 21:01:47.441517    5057 out.go:179] * Enabled addons: registry-creds, amd-gpu-device-plugin, cloud-spanner, storage-provisioner, nvidia-device-plugin, ingress-dns, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1013 21:01:47.442677    5057 addons.go:514] duration metric: took 1m58.106325975s for enable addons: enabled=[registry-creds amd-gpu-device-plugin cloud-spanner storage-provisioner nvidia-device-plugin ingress-dns metrics-server yakd storage-provisioner-rancher volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I1013 21:01:47.442720    5057 start.go:246] waiting for cluster config update ...
	I1013 21:01:47.442742    5057 start.go:255] writing updated cluster config ...
	I1013 21:01:47.443049    5057 ssh_runner.go:195] Run: rm -f paused
	I1013 21:01:47.446548    5057 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 21:01:47.449724    5057 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-zfn57" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:01:47.455811    5057 pod_ready.go:94] pod "coredns-66bc5c9577-zfn57" is "Ready"
	I1013 21:01:47.455838    5057 pod_ready.go:86] duration metric: took 6.091487ms for pod "coredns-66bc5c9577-zfn57" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:01:47.459430    5057 pod_ready.go:83] waiting for pod "etcd-addons-421494" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:01:47.466921    5057 pod_ready.go:94] pod "etcd-addons-421494" is "Ready"
	I1013 21:01:47.466948    5057 pod_ready.go:86] duration metric: took 7.494514ms for pod "etcd-addons-421494" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:01:47.472173    5057 pod_ready.go:83] waiting for pod "kube-apiserver-addons-421494" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:01:47.476928    5057 pod_ready.go:94] pod "kube-apiserver-addons-421494" is "Ready"
	I1013 21:01:47.476956    5057 pod_ready.go:86] duration metric: took 4.757448ms for pod "kube-apiserver-addons-421494" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:01:47.479122    5057 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-421494" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:01:47.850502    5057 pod_ready.go:94] pod "kube-controller-manager-addons-421494" is "Ready"
	I1013 21:01:47.850531    5057 pod_ready.go:86] duration metric: took 371.37726ms for pod "kube-controller-manager-addons-421494" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:01:48.050879    5057 pod_ready.go:83] waiting for pod "kube-proxy-zrcq6" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:01:48.451018    5057 pod_ready.go:94] pod "kube-proxy-zrcq6" is "Ready"
	I1013 21:01:48.451047    5057 pod_ready.go:86] duration metric: took 400.09032ms for pod "kube-proxy-zrcq6" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:01:48.650177    5057 pod_ready.go:83] waiting for pod "kube-scheduler-addons-421494" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:01:49.050340    5057 pod_ready.go:94] pod "kube-scheduler-addons-421494" is "Ready"
	I1013 21:01:49.050368    5057 pod_ready.go:86] duration metric: took 400.117412ms for pod "kube-scheduler-addons-421494" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:01:49.050380    5057 pod_ready.go:40] duration metric: took 1.603801539s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 21:01:49.465802    5057 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1013 21:01:49.474495    5057 out.go:179] * Done! kubectl is now configured to use "addons-421494" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 13 21:04:59 addons-421494 crio[834]: time="2025-10-13T21:04:59.110039332Z" level=info msg="Removed container 53e2bca6aed5e00f021353744eccb12e18472141e10c67610a42d0f2a7ed89e1: kube-system/registry-creds-764b6fb674-f5gvj/registry-creds" id=aad65452-4b15-4b51-95c7-180a543b7a25 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 13 21:04:59 addons-421494 crio[834]: time="2025-10-13T21:04:59.993310427Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-8t2lp/POD" id=6a849e78-d420-4b9d-b4c5-94dda3df0788 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 13 21:04:59 addons-421494 crio[834]: time="2025-10-13T21:04:59.993380301Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 21:05:00 addons-421494 crio[834]: time="2025-10-13T21:05:00.01264757Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-8t2lp Namespace:default ID:fc13ac94e0ce8ebc4f1a21129c97bbbc28e9843c4b95ae9703276c4d782c7eeb UID:7fa312c7-3394-49a1-9398-6f96f88a589e NetNS:/var/run/netns/493f0d72-eab3-4484-8e93-4e372c397e36 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400167e668}] Aliases:map[]}"
	Oct 13 21:05:00 addons-421494 crio[834]: time="2025-10-13T21:05:00.012844267Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-8t2lp to CNI network \"kindnet\" (type=ptp)"
	Oct 13 21:05:00 addons-421494 crio[834]: time="2025-10-13T21:05:00.123730974Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-8t2lp Namespace:default ID:fc13ac94e0ce8ebc4f1a21129c97bbbc28e9843c4b95ae9703276c4d782c7eeb UID:7fa312c7-3394-49a1-9398-6f96f88a589e NetNS:/var/run/netns/493f0d72-eab3-4484-8e93-4e372c397e36 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400167e668}] Aliases:map[]}"
	Oct 13 21:05:00 addons-421494 crio[834]: time="2025-10-13T21:05:00.124397575Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-8t2lp for CNI network kindnet (type=ptp)"
	Oct 13 21:05:00 addons-421494 crio[834]: time="2025-10-13T21:05:00.154093664Z" level=info msg="Ran pod sandbox fc13ac94e0ce8ebc4f1a21129c97bbbc28e9843c4b95ae9703276c4d782c7eeb with infra container: default/hello-world-app-5d498dc89-8t2lp/POD" id=6a849e78-d420-4b9d-b4c5-94dda3df0788 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 13 21:05:00 addons-421494 crio[834]: time="2025-10-13T21:05:00.166762372Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=961eba5d-e35d-49ce-bd82-60e0a49807c0 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 21:05:00 addons-421494 crio[834]: time="2025-10-13T21:05:00.167107242Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=961eba5d-e35d-49ce-bd82-60e0a49807c0 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 21:05:00 addons-421494 crio[834]: time="2025-10-13T21:05:00.167497132Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=961eba5d-e35d-49ce-bd82-60e0a49807c0 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 21:05:00 addons-421494 crio[834]: time="2025-10-13T21:05:00.174727382Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=0b6e6a6c-4eed-41e8-8a1d-b7ad26bb6e26 name=/runtime.v1.ImageService/PullImage
	Oct 13 21:05:00 addons-421494 crio[834]: time="2025-10-13T21:05:00.19625278Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Oct 13 21:05:00 addons-421494 crio[834]: time="2025-10-13T21:05:00.872422305Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b" id=0b6e6a6c-4eed-41e8-8a1d-b7ad26bb6e26 name=/runtime.v1.ImageService/PullImage
	Oct 13 21:05:00 addons-421494 crio[834]: time="2025-10-13T21:05:00.873047742Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=d936b053-eb5f-4772-9c5d-d6c4e7ee7fa6 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 21:05:00 addons-421494 crio[834]: time="2025-10-13T21:05:00.880354782Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=cb705d82-23b8-4094-b48c-94055734ad58 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 21:05:00 addons-421494 crio[834]: time="2025-10-13T21:05:00.886210477Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-8t2lp/hello-world-app" id=2a1db68e-641c-42f2-b61e-bbb697e76c15 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 21:05:00 addons-421494 crio[834]: time="2025-10-13T21:05:00.886954262Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 21:05:00 addons-421494 crio[834]: time="2025-10-13T21:05:00.894194195Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 21:05:00 addons-421494 crio[834]: time="2025-10-13T21:05:00.895185219Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/990ae70772e6a1e742e65fc10d2a3b6f439fd49b9d7d6385dece592b78b3ce87/merged/etc/passwd: no such file or directory"
	Oct 13 21:05:00 addons-421494 crio[834]: time="2025-10-13T21:05:00.895303862Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/990ae70772e6a1e742e65fc10d2a3b6f439fd49b9d7d6385dece592b78b3ce87/merged/etc/group: no such file or directory"
	Oct 13 21:05:00 addons-421494 crio[834]: time="2025-10-13T21:05:00.895653679Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 21:05:00 addons-421494 crio[834]: time="2025-10-13T21:05:00.923767365Z" level=info msg="Created container 295ebe7375a7452d428567aff02fbf82cd1262cb08db425fb5ac5f9356427f8d: default/hello-world-app-5d498dc89-8t2lp/hello-world-app" id=2a1db68e-641c-42f2-b61e-bbb697e76c15 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 21:05:00 addons-421494 crio[834]: time="2025-10-13T21:05:00.929694968Z" level=info msg="Starting container: 295ebe7375a7452d428567aff02fbf82cd1262cb08db425fb5ac5f9356427f8d" id=75666434-c84d-4f60-821a-081f3ecf4fe7 name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 21:05:00 addons-421494 crio[834]: time="2025-10-13T21:05:00.932002818Z" level=info msg="Started container" PID=7282 containerID=295ebe7375a7452d428567aff02fbf82cd1262cb08db425fb5ac5f9356427f8d description=default/hello-world-app-5d498dc89-8t2lp/hello-world-app id=75666434-c84d-4f60-821a-081f3ecf4fe7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=fc13ac94e0ce8ebc4f1a21129c97bbbc28e9843c4b95ae9703276c4d782c7eeb
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	295ebe7375a74       docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b                                        Less than a second ago   Running             hello-world-app                          0                   fc13ac94e0ce8       hello-world-app-5d498dc89-8t2lp            default
	0e6df95932d1b       a2fd0654e5baeec8de2209bfade13a0034e942e708fd2bbfce69bb26a3c02e14                                                                             3 seconds ago            Exited              registry-creds                           2                   121caeffd17c5       registry-creds-764b6fb674-f5gvj            kube-system
	6c233c2f46477       docker.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0                                              2 minutes ago            Running             nginx                                    0                   d0b75c394d731       nginx                                      default
	c186e226fe1ef       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          3 minutes ago            Running             busybox                                  0                   f4bf60cf7a06d       busybox                                    default
	ebb997c7d79d5       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          3 minutes ago            Running             csi-snapshotter                          0                   5899fac2669ce       csi-hostpathplugin-c6mtm                   kube-system
	0c7386ac64481       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          3 minutes ago            Running             csi-provisioner                          0                   5899fac2669ce       csi-hostpathplugin-c6mtm                   kube-system
	313e4250764b6       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            3 minutes ago            Running             liveness-probe                           0                   5899fac2669ce       csi-hostpathplugin-c6mtm                   kube-system
	5231e1c4699c5       registry.k8s.io/ingress-nginx/controller@sha256:f99290cbebde470590890356f061fd429ff3def99cc2dedb1fcd21626c5d73d6                             3 minutes ago            Running             controller                               0                   acb65586c38b4       ingress-nginx-controller-9cc49f96f-bgsnp   ingress-nginx
	4666db466f3f8       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           3 minutes ago            Running             hostpath                                 0                   5899fac2669ce       csi-hostpathplugin-c6mtm                   kube-system
	ace4823451e27       c67c707f59d87e1add5896e856d3ed36fbff2a778620f70d33b799e0541a77e3                                                                             3 minutes ago            Exited              patch                                    3                   a959df44da424       ingress-nginx-admission-patch-vjwq4        ingress-nginx
	26d51c5b89e19       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                3 minutes ago            Running             node-driver-registrar                    0                   5899fac2669ce       csi-hostpathplugin-c6mtm                   kube-system
	7e0e137602daf       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:f279436ecca5b88c20fd93c0d2a668ace136058ecad987e96e26014585e335b4                            3 minutes ago            Running             gadget                                   0                   50852bf51bcf7       gadget-lrgtr                               gadget
	6c7397833e400       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 3 minutes ago            Running             gcp-auth                                 0                   9f6a9ba826692       gcp-auth-78565c9fb4-vt59h                  gcp-auth
	cdb0ef6db7620       gcr.io/cloud-spanner-emulator/emulator@sha256:c2688dc4b7ecb4546084321d63c2b3b616a54263488137e18fcb7c7005aef086                               3 minutes ago            Running             cloud-spanner-emulator                   0                   c8ff66f314323       cloud-spanner-emulator-86bd5cbb97-zldmh    default
	181410ca5fe49       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              3 minutes ago            Running             registry-proxy                           0                   ca67a652a7da2       registry-proxy-nfn7w                       kube-system
	38812285e7c22       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago            Running             volume-snapshot-controller               0                   a03b81729604e       snapshot-controller-7d9fbc56b8-6cm8c       kube-system
	18ed9fad96827       docker.io/library/registry@sha256:f26c394e5b7c3a707c7373c3e9388e44f0d5bdd3def19652c6bd2ac1a0fa6758                                           3 minutes ago            Running             registry                                 0                   1420fb4c7b171       registry-66898fdd98-5nbln                  kube-system
	2f837ddcec93c       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              3 minutes ago            Running             yakd                                     0                   8aa3e833d2426       yakd-dashboard-5ff678cb9-fz2dg             yakd-dashboard
	e469acf690df4       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:73b47a951627d604fcf1cf93ddc15004fe3854f881da22f690854d098255f1c1                   3 minutes ago            Exited              create                                   0                   2a35515cb5491       ingress-nginx-admission-create-97mkt       ingress-nginx
	9ba5c620ce249       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   3 minutes ago            Running             csi-external-health-monitor-controller   0                   5899fac2669ce       csi-hostpathplugin-c6mtm                   kube-system
	d970bcf470d76       nvcr.io/nvidia/k8s-device-plugin@sha256:206d989142113ab71eaf27958a0e0a203f40103cf5b48890f5de80fd1b3fcfde                                     3 minutes ago            Running             nvidia-device-plugin-ctr                 0                   baff4b682a58f       nvidia-device-plugin-daemonset-lswkm       kube-system
	b5baee2b95e6c       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             4 minutes ago            Running             local-path-provisioner                   0                   63e045eaf2eb1       local-path-provisioner-648f6765c9-w6x97    local-path-storage
	ba960f407a05a       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              4 minutes ago            Running             csi-resizer                              0                   28e9126ec7b8a       csi-hostpath-resizer-0                     kube-system
	d94a39038ca93       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               4 minutes ago            Running             minikube-ingress-dns                     0                   0a6f4b6a787fc       kube-ingress-dns-minikube                  kube-system
	5842ed1dd0727       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      4 minutes ago            Running             volume-snapshot-controller               0                   cf4c8347acee7       snapshot-controller-7d9fbc56b8-9phdb       kube-system
	fa6943addc3e3       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        4 minutes ago            Running             metrics-server                           0                   3f075035cbcf2       metrics-server-85b7d694d7-hrqb8            kube-system
	af2a904ce7f6b       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             4 minutes ago            Running             csi-attacher                             0                   4ef516ee13ef7       csi-hostpath-attacher-0                    kube-system
	24771ab281e11       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             4 minutes ago            Running             storage-provisioner                      0                   3e46513d7c680       storage-provisioner                        kube-system
	99d43d6662679       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             4 minutes ago            Running             coredns                                  0                   f03aa3db10341       coredns-66bc5c9577-zfn57                   kube-system
	056c2dbfb314d       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             5 minutes ago            Running             kindnet-cni                              0                   ab6f569d07586       kindnet-vz77r                              kube-system
	99ba07ab68f8f       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             5 minutes ago            Running             kube-proxy                               0                   65e0a8010aa46       kube-proxy-zrcq6                           kube-system
	b69697c681afb       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             5 minutes ago            Running             etcd                                     0                   119e677171527       etcd-addons-421494                         kube-system
	3cef779926c40       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             5 minutes ago            Running             kube-controller-manager                  0                   4b3de415d2da3       kube-controller-manager-addons-421494      kube-system
	65658d48b6c6a       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             5 minutes ago            Running             kube-apiserver                           0                   f5382ce7e516a       kube-apiserver-addons-421494               kube-system
	7eaba707b03a1       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             5 minutes ago            Running             kube-scheduler                           0                   6d881a4e14f60       kube-scheduler-addons-421494               kube-system
	
	
	==> coredns [99d43d6662679bda28129b2b95cba7f724d95042cc2bd6f3c957a1e2ba16b5d8] <==
	[INFO] 10.244.0.7:38652 - 41791 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002179492s
	[INFO] 10.244.0.7:38652 - 38567 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000101905s
	[INFO] 10.244.0.7:38652 - 19488 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000096244s
	[INFO] 10.244.0.7:54239 - 6604 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000130664s
	[INFO] 10.244.0.7:54239 - 6391 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000179154s
	[INFO] 10.244.0.7:42761 - 30747 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000104572s
	[INFO] 10.244.0.7:42761 - 30542 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000066386s
	[INFO] 10.244.0.7:41334 - 2764 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000082525s
	[INFO] 10.244.0.7:41334 - 2335 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000071161s
	[INFO] 10.244.0.7:41934 - 13130 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001847914s
	[INFO] 10.244.0.7:41934 - 13343 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002010257s
	[INFO] 10.244.0.7:41550 - 10640 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000127193s
	[INFO] 10.244.0.7:41550 - 10812 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000116641s
	[INFO] 10.244.0.19:49606 - 16711 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00069577s
	[INFO] 10.244.0.19:39506 - 21828 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000167864s
	[INFO] 10.244.0.19:43523 - 53182 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000108026s
	[INFO] 10.244.0.19:33116 - 42110 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000138622s
	[INFO] 10.244.0.19:38774 - 7284 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000116116s
	[INFO] 10.244.0.19:44516 - 31416 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000092929s
	[INFO] 10.244.0.19:49065 - 13340 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002962866s
	[INFO] 10.244.0.19:54874 - 57865 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002840038s
	[INFO] 10.244.0.19:59174 - 32170 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003098682s
	[INFO] 10.244.0.19:56378 - 46432 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001833096s
	[INFO] 10.244.0.23:41842 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000187515s
	[INFO] 10.244.0.23:36305 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000148131s
	
	
	==> describe nodes <==
	Name:               addons-421494
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-421494
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6d66ff63385795e7745a92b3d96cb54f5b977801
	                    minikube.k8s.io/name=addons-421494
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_13T20_59_45_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-421494
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-421494"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 Oct 2025 20:59:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-421494
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 Oct 2025 21:05:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 Oct 2025 21:04:51 +0000   Mon, 13 Oct 2025 20:59:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 Oct 2025 21:04:51 +0000   Mon, 13 Oct 2025 20:59:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 Oct 2025 21:04:51 +0000   Mon, 13 Oct 2025 20:59:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 13 Oct 2025 21:04:51 +0000   Mon, 13 Oct 2025 21:00:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-421494
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 1e8b39055d61497394f2cbb9c0725abf
	  System UUID:                f096a897-2137-4c9b-a2f8-e9d35211479f
	  Boot ID:                    d306204e-e74c-4697-9887-c9f3c96cc083
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m12s
	  default                     cloud-spanner-emulator-86bd5cbb97-zldmh     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m9s
	  default                     hello-world-app-5d498dc89-8t2lp             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m25s
	  gadget                      gadget-lrgtr                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m8s
	  gcp-auth                    gcp-auth-78565c9fb4-vt59h                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m3s
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-bgsnp    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         5m7s
	  kube-system                 coredns-66bc5c9577-zfn57                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m12s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m6s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m6s
	  kube-system                 csi-hostpathplugin-c6mtm                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 etcd-addons-421494                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m19s
	  kube-system                 kindnet-vz77r                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      5m13s
	  kube-system                 kube-apiserver-addons-421494                250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m18s
	  kube-system                 kube-controller-manager-addons-421494       200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m18s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m8s
	  kube-system                 kube-proxy-zrcq6                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m13s
	  kube-system                 kube-scheduler-addons-421494                100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m19s
	  kube-system                 metrics-server-85b7d694d7-hrqb8             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         5m8s
	  kube-system                 nvidia-device-plugin-daemonset-lswkm        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 registry-66898fdd98-5nbln                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m9s
	  kube-system                 registry-creds-764b6fb674-f5gvj             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m11s
	  kube-system                 registry-proxy-nfn7w                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 snapshot-controller-7d9fbc56b8-6cm8c        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m6s
	  kube-system                 snapshot-controller-7d9fbc56b8-9phdb        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m6s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m9s
	  local-path-storage          local-path-provisioner-648f6765c9-w6x97     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m8s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-fz2dg              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     5m7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 5m11s                  kube-proxy       
	  Warning  CgroupV1                 5m26s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m25s (x8 over 5m26s)  kubelet          Node addons-421494 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m25s (x8 over 5m26s)  kubelet          Node addons-421494 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m25s (x8 over 5m26s)  kubelet          Node addons-421494 status is now: NodeHasSufficientPID
	  Normal   Starting                 5m18s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m18s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m18s                  kubelet          Node addons-421494 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m18s                  kubelet          Node addons-421494 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m18s                  kubelet          Node addons-421494 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m14s                  node-controller  Node addons-421494 event: Registered Node addons-421494 in Controller
	  Normal   NodeReady                4m31s                  kubelet          Node addons-421494 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct13 20:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015096] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.497062] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032757] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.728511] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.553238] kauditd_printk_skb: 36 callbacks suppressed
	[Oct13 20:59] overlayfs: idmapped layers are currently not supported
	[  +0.065201] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [b69697c681afb5d9720692f76f4c7fb1f08cbb53f5d5c7219d2ecab1e81e51ad] <==
	{"level":"warn","ts":"2025-10-13T20:59:40.556959Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T20:59:40.568902Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T20:59:40.592976Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T20:59:40.627952Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T20:59:40.655707Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T20:59:40.665805Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T20:59:40.703547Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T20:59:40.732884Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T20:59:40.761379Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T20:59:40.788540Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T20:59:40.849797Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T20:59:40.851065Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T20:59:40.882582Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T20:59:40.909305Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T20:59:40.925271Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T20:59:40.988248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T20:59:41.006734Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T20:59:41.047721Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T20:59:41.144749Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T20:59:56.689074Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T20:59:56.696568Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:00:18.945584Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:00:18.970200Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:00:19.009679Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:00:19.026022Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58834","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [6c7397833e4002498f6710737345e773d2b044cae6bf0947da0148b393468546] <==
	2025/10/13 21:01:24 GCP Auth Webhook started!
	2025/10/13 21:01:50 Ready to marshal response ...
	2025/10/13 21:01:50 Ready to write response ...
	2025/10/13 21:01:50 Ready to marshal response ...
	2025/10/13 21:01:50 Ready to write response ...
	2025/10/13 21:01:50 Ready to marshal response ...
	2025/10/13 21:01:50 Ready to write response ...
	2025/10/13 21:02:11 Ready to marshal response ...
	2025/10/13 21:02:11 Ready to write response ...
	2025/10/13 21:02:12 Ready to marshal response ...
	2025/10/13 21:02:12 Ready to write response ...
	2025/10/13 21:02:12 Ready to marshal response ...
	2025/10/13 21:02:12 Ready to write response ...
	2025/10/13 21:02:20 Ready to marshal response ...
	2025/10/13 21:02:20 Ready to write response ...
	2025/10/13 21:02:36 Ready to marshal response ...
	2025/10/13 21:02:36 Ready to write response ...
	2025/10/13 21:02:37 Ready to marshal response ...
	2025/10/13 21:02:37 Ready to write response ...
	2025/10/13 21:02:52 Ready to marshal response ...
	2025/10/13 21:02:52 Ready to write response ...
	2025/10/13 21:04:59 Ready to marshal response ...
	2025/10/13 21:04:59 Ready to write response ...
	
	
	==> kernel <==
	 21:05:02 up 47 min,  0 user,  load average: 0.30, 0.80, 0.47
	Linux addons-421494 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [056c2dbfb314d220189391293f077c99025846b4b9f34abe273b798f61317570] <==
	I1013 21:03:00.920560       1 main.go:301] handling current node
	I1013 21:03:10.927899       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 21:03:10.927948       1 main.go:301] handling current node
	I1013 21:03:20.926509       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 21:03:20.926540       1 main.go:301] handling current node
	I1013 21:03:30.920931       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 21:03:30.920981       1 main.go:301] handling current node
	I1013 21:03:40.919907       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 21:03:40.919940       1 main.go:301] handling current node
	I1013 21:03:50.928062       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 21:03:50.928168       1 main.go:301] handling current node
	I1013 21:04:00.927003       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 21:04:00.927034       1 main.go:301] handling current node
	I1013 21:04:10.919914       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 21:04:10.920014       1 main.go:301] handling current node
	I1013 21:04:20.929049       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 21:04:20.929081       1 main.go:301] handling current node
	I1013 21:04:30.927274       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 21:04:30.927308       1 main.go:301] handling current node
	I1013 21:04:40.929074       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 21:04:40.929109       1 main.go:301] handling current node
	I1013 21:04:50.920174       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 21:04:50.920331       1 main.go:301] handling current node
	I1013 21:05:00.919949       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 21:05:00.920011       1 main.go:301] handling current node
	
	
	==> kube-apiserver [65658d48b6c6a4b767ffad937ef4f74467a3d49eb81ae57813596987defa754b] <==
	I1013 20:59:59.458034       1 alloc.go:328] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.98.136.212"}
	W1013 21:00:18.945473       1 logging.go:55] [core] [Channel #267 SubChannel #268]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1013 21:00:18.959956       1 logging.go:55] [core] [Channel #271 SubChannel #272]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1013 21:00:19.009429       1 logging.go:55] [core] [Channel #275 SubChannel #276]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1013 21:00:19.025848       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1013 21:00:31.349123       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.136.212:443: connect: connection refused
	E1013 21:00:31.349416       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.136.212:443: connect: connection refused" logger="UnhandledError"
	W1013 21:00:31.350066       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.136.212:443: connect: connection refused
	E1013 21:00:31.350085       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.136.212:443: connect: connection refused" logger="UnhandledError"
	W1013 21:00:31.415564       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.136.212:443: connect: connection refused
	E1013 21:00:31.415660       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.136.212:443: connect: connection refused" logger="UnhandledError"
	E1013 21:00:38.942806       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.140.108:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.140.108:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.140.108:443: connect: connection refused" logger="UnhandledError"
	W1013 21:00:38.944478       1 handler_proxy.go:99] no RequestInfo found in the context
	E1013 21:00:38.944596       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1013 21:00:38.980416       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1013 21:00:39.064159       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E1013 21:02:00.006478       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:43514: use of closed network connection
	E1013 21:02:00.662812       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:43556: use of closed network connection
	I1013 21:02:36.959579       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1013 21:02:37.285426       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.96.152.34"}
	I1013 21:02:47.721707       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1013 21:04:59.843926       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.106.135.186"}
	
	
	==> kube-controller-manager [3cef779926c40efd23a69e4a3f37a0bddcdaf08e16cbb526f1d13502db7a95a1] <==
	I1013 20:59:48.962854       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1013 20:59:48.962893       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1013 20:59:48.962922       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1013 20:59:48.962961       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1013 20:59:48.963029       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1013 20:59:48.964228       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1013 20:59:48.964309       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1013 20:59:48.964321       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1013 20:59:48.964330       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1013 20:59:48.964987       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1013 20:59:48.971563       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1013 20:59:48.974693       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1013 20:59:48.978020       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 20:59:48.978044       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1013 20:59:48.978053       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1013 20:59:48.984194       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1013 20:59:48.989926       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1013 21:00:18.938337       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1013 21:00:18.938486       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1013 21:00:18.938542       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1013 21:00:18.997072       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1013 21:00:19.001285       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1013 21:00:19.039337       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1013 21:00:19.102494       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 21:00:33.972340       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [99ba07ab68f8f8928330f6da7154c1fa9e2a9c5025906f287d474fc44f71bcd3] <==
	I1013 20:59:50.724267       1 server_linux.go:53] "Using iptables proxy"
	I1013 20:59:50.801439       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 20:59:50.901763       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 20:59:50.901792       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1013 20:59:50.901864       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 20:59:50.952117       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1013 20:59:50.952172       1 server_linux.go:132] "Using iptables Proxier"
	I1013 20:59:50.962407       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 20:59:50.974937       1 server.go:527] "Version info" version="v1.34.1"
	I1013 20:59:50.974973       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 20:59:50.976510       1 config.go:200] "Starting service config controller"
	I1013 20:59:50.976520       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 20:59:50.976537       1 config.go:106] "Starting endpoint slice config controller"
	I1013 20:59:50.976541       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 20:59:50.976560       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 20:59:50.976565       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 20:59:50.977196       1 config.go:309] "Starting node config controller"
	I1013 20:59:50.977203       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 20:59:50.977209       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 20:59:51.077634       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1013 20:59:51.077654       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1013 20:59:51.077666       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [7eaba707b03a138f0291c2f0905cf2c10e0c0e7a7d56a206cb7266035a7280bb] <==
	I1013 20:59:43.044295       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 20:59:43.046684       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1013 20:59:43.047105       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 20:59:43.047264       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 20:59:43.047130       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1013 20:59:43.060526       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1013 20:59:43.060723       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1013 20:59:43.060802       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1013 20:59:43.060870       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1013 20:59:43.060976       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1013 20:59:43.061068       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1013 20:59:43.061211       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1013 20:59:43.061364       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1013 20:59:43.062058       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1013 20:59:43.062111       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1013 20:59:43.061996       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1013 20:59:43.063903       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1013 20:59:43.063939       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1013 20:59:43.064008       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1013 20:59:43.064168       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1013 20:59:43.064179       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1013 20:59:43.064225       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1013 20:59:43.061635       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1013 20:59:43.061729       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	I1013 20:59:44.648313       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 13 21:03:28 addons-421494 kubelet[1290]: I1013 21:03:28.418790    1290 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-lswkm" secret="" err="secret \"gcp-auth\" not found"
	Oct 13 21:03:31 addons-421494 kubelet[1290]: I1013 21:03:31.418596    1290 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66898fdd98-5nbln" secret="" err="secret \"gcp-auth\" not found"
	Oct 13 21:03:50 addons-421494 kubelet[1290]: I1013 21:03:50.418830    1290 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-nfn7w" secret="" err="secret \"gcp-auth\" not found"
	Oct 13 21:04:37 addons-421494 kubelet[1290]: I1013 21:04:37.418241    1290 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-lswkm" secret="" err="secret \"gcp-auth\" not found"
	Oct 13 21:04:41 addons-421494 kubelet[1290]: I1013 21:04:41.619756    1290 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-f5gvj" secret="" err="secret \"gcp-auth\" not found"
	Oct 13 21:04:41 addons-421494 kubelet[1290]: W1013 21:04:41.653473    1290 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/1c1825622e98f9a9fb3c72e6860c723048ac7a6e801dcc40c454272c1bcfd512/crio-121caeffd17c59071bbd5cd7cd26ca036c769442254a4fdebadb5db204904c94 WatchSource:0}: Error finding container 121caeffd17c59071bbd5cd7cd26ca036c769442254a4fdebadb5db204904c94: Status 404 returned error can't find the container with id 121caeffd17c59071bbd5cd7cd26ca036c769442254a4fdebadb5db204904c94
	Oct 13 21:04:42 addons-421494 kubelet[1290]: I1013 21:04:42.418879    1290 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66898fdd98-5nbln" secret="" err="secret \"gcp-auth\" not found"
	Oct 13 21:04:44 addons-421494 kubelet[1290]: I1013 21:04:44.013184    1290 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-f5gvj" secret="" err="secret \"gcp-auth\" not found"
	Oct 13 21:04:44 addons-421494 kubelet[1290]: I1013 21:04:44.013245    1290 scope.go:117] "RemoveContainer" containerID="640802af5603d407dca0841e0da073aba5d0e48bb6a63c26a7123d7f3c3b089d"
	Oct 13 21:04:44 addons-421494 kubelet[1290]: E1013 21:04:44.591708    1290 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/19d8431f4273348118e6a89d34eaea9d578ce91dae4316a98459b5c6f7e9d2a9/diff" to get inode usage: stat /var/lib/containers/storage/overlay/19d8431f4273348118e6a89d34eaea9d578ce91dae4316a98459b5c6f7e9d2a9/diff: no such file or directory, extraDiskErr: <nil>
	Oct 13 21:04:44 addons-421494 kubelet[1290]: I1013 21:04:44.717397    1290 scope.go:117] "RemoveContainer" containerID="640802af5603d407dca0841e0da073aba5d0e48bb6a63c26a7123d7f3c3b089d"
	Oct 13 21:04:45 addons-421494 kubelet[1290]: I1013 21:04:45.038022    1290 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-f5gvj" secret="" err="secret \"gcp-auth\" not found"
	Oct 13 21:04:45 addons-421494 kubelet[1290]: I1013 21:04:45.038076    1290 scope.go:117] "RemoveContainer" containerID="53e2bca6aed5e00f021353744eccb12e18472141e10c67610a42d0f2a7ed89e1"
	Oct 13 21:04:45 addons-421494 kubelet[1290]: E1013 21:04:45.038251    1290 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 10s restarting failed container=registry-creds pod=registry-creds-764b6fb674-f5gvj_kube-system(e6126817-d300-48b3-a682-ebad0a32e077)\"" pod="kube-system/registry-creds-764b6fb674-f5gvj" podUID="e6126817-d300-48b3-a682-ebad0a32e077"
	Oct 13 21:04:46 addons-421494 kubelet[1290]: I1013 21:04:46.041656    1290 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-f5gvj" secret="" err="secret \"gcp-auth\" not found"
	Oct 13 21:04:46 addons-421494 kubelet[1290]: I1013 21:04:46.041718    1290 scope.go:117] "RemoveContainer" containerID="53e2bca6aed5e00f021353744eccb12e18472141e10c67610a42d0f2a7ed89e1"
	Oct 13 21:04:46 addons-421494 kubelet[1290]: E1013 21:04:46.041876    1290 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 10s restarting failed container=registry-creds pod=registry-creds-764b6fb674-f5gvj_kube-system(e6126817-d300-48b3-a682-ebad0a32e077)\"" pod="kube-system/registry-creds-764b6fb674-f5gvj" podUID="e6126817-d300-48b3-a682-ebad0a32e077"
	Oct 13 21:04:58 addons-421494 kubelet[1290]: I1013 21:04:58.418976    1290 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-f5gvj" secret="" err="secret \"gcp-auth\" not found"
	Oct 13 21:04:58 addons-421494 kubelet[1290]: I1013 21:04:58.419485    1290 scope.go:117] "RemoveContainer" containerID="53e2bca6aed5e00f021353744eccb12e18472141e10c67610a42d0f2a7ed89e1"
	Oct 13 21:04:59 addons-421494 kubelet[1290]: I1013 21:04:59.089793    1290 scope.go:117] "RemoveContainer" containerID="53e2bca6aed5e00f021353744eccb12e18472141e10c67610a42d0f2a7ed89e1"
	Oct 13 21:04:59 addons-421494 kubelet[1290]: I1013 21:04:59.090360    1290 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-f5gvj" secret="" err="secret \"gcp-auth\" not found"
	Oct 13 21:04:59 addons-421494 kubelet[1290]: I1013 21:04:59.090682    1290 scope.go:117] "RemoveContainer" containerID="0e6df95932d1b95e3e13abaa1427e6759f8d88bd4cb829c738db95bbe9cdf5c1"
	Oct 13 21:04:59 addons-421494 kubelet[1290]: E1013 21:04:59.096574    1290 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 20s restarting failed container=registry-creds pod=registry-creds-764b6fb674-f5gvj_kube-system(e6126817-d300-48b3-a682-ebad0a32e077)\"" pod="kube-system/registry-creds-764b6fb674-f5gvj" podUID="e6126817-d300-48b3-a682-ebad0a32e077"
	Oct 13 21:04:59 addons-421494 kubelet[1290]: I1013 21:04:59.810932    1290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/7fa312c7-3394-49a1-9398-6f96f88a589e-gcp-creds\") pod \"hello-world-app-5d498dc89-8t2lp\" (UID: \"7fa312c7-3394-49a1-9398-6f96f88a589e\") " pod="default/hello-world-app-5d498dc89-8t2lp"
	Oct 13 21:04:59 addons-421494 kubelet[1290]: I1013 21:04:59.811592    1290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n9dj\" (UniqueName: \"kubernetes.io/projected/7fa312c7-3394-49a1-9398-6f96f88a589e-kube-api-access-7n9dj\") pod \"hello-world-app-5d498dc89-8t2lp\" (UID: \"7fa312c7-3394-49a1-9398-6f96f88a589e\") " pod="default/hello-world-app-5d498dc89-8t2lp"
	
	
	==> storage-provisioner [24771ab281e111109e2945e2f4112c4fb92daca6a6eb93304fbc65748bee14e7] <==
	W1013 21:04:38.113848       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:04:40.117573       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:04:40.127164       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:04:42.139191       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:04:42.152817       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:04:44.155733       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:04:44.159982       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:04:46.163541       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:04:46.168079       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:04:48.171310       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:04:48.175271       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:04:50.178189       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:04:50.183212       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:04:52.185964       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:04:52.190453       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:04:54.193743       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:04:54.198335       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:04:56.200945       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:04:56.207729       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:04:58.211704       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:04:58.216671       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:05:00.222544       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:05:00.232093       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:05:02.235121       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:05:02.240263       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-421494 -n addons-421494
helpers_test.go:269: (dbg) Run:  kubectl --context addons-421494 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-97mkt ingress-nginx-admission-patch-vjwq4
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-421494 describe pod ingress-nginx-admission-create-97mkt ingress-nginx-admission-patch-vjwq4
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-421494 describe pod ingress-nginx-admission-create-97mkt ingress-nginx-admission-patch-vjwq4: exit status 1 (86.893825ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-97mkt" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-vjwq4" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-421494 describe pod ingress-nginx-admission-create-97mkt ingress-nginx-admission-patch-vjwq4: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-421494 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-421494 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (259.587544ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1013 21:05:03.420384   14799 out.go:360] Setting OutFile to fd 1 ...
	I1013 21:05:03.420558   14799 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:05:03.420567   14799 out.go:374] Setting ErrFile to fd 2...
	I1013 21:05:03.420573   14799 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:05:03.420823   14799 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-2495/.minikube/bin
	I1013 21:05:03.421138   14799 mustload.go:65] Loading cluster: addons-421494
	I1013 21:05:03.421484   14799 config.go:182] Loaded profile config "addons-421494": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:05:03.421502   14799 addons.go:606] checking whether the cluster is paused
	I1013 21:05:03.421604   14799 config.go:182] Loaded profile config "addons-421494": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:05:03.421627   14799 host.go:66] Checking if "addons-421494" exists ...
	I1013 21:05:03.422090   14799 cli_runner.go:164] Run: docker container inspect addons-421494 --format={{.State.Status}}
	I1013 21:05:03.442082   14799 ssh_runner.go:195] Run: systemctl --version
	I1013 21:05:03.442150   14799 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-421494
	I1013 21:05:03.461292   14799 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/addons-421494/id_rsa Username:docker}
	I1013 21:05:03.562887   14799 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 21:05:03.562967   14799 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 21:05:03.596283   14799 cri.go:89] found id: "0e6df95932d1b95e3e13abaa1427e6759f8d88bd4cb829c738db95bbe9cdf5c1"
	I1013 21:05:03.596302   14799 cri.go:89] found id: "ebb997c7d79d583428b1355de2046886a326ae3c3e20f70bbe1ae0f9e6703f7f"
	I1013 21:05:03.596307   14799 cri.go:89] found id: "0c7386ac64481c921e58e89cc194fda8203de7ead964013cac7400057edd284b"
	I1013 21:05:03.596311   14799 cri.go:89] found id: "313e4250764b6b9ca250b085946faed4744d1dc6516ddd2c7da718d0652717f3"
	I1013 21:05:03.596314   14799 cri.go:89] found id: "4666db466f3f81ef2ee14c4d6b8e30164f55fc982ba337c36b3afda038eb1963"
	I1013 21:05:03.596318   14799 cri.go:89] found id: "26d51c5b89e1953bc71b605eafac087308c86278491ae0cac3f48f2a37104464"
	I1013 21:05:03.596333   14799 cri.go:89] found id: "181410ca5fe49d39df6815ad7e630167615f63fc2ac2ea29759d347e79cf62cb"
	I1013 21:05:03.596336   14799 cri.go:89] found id: "38812285e7c22cd5e7853a2d043e5969e7e9e46a305880956354a8717537af6b"
	I1013 21:05:03.596340   14799 cri.go:89] found id: "18ed9fad968275ac6372a8352ae2ebed473b24a41d01a533887860e8f4567b60"
	I1013 21:05:03.596346   14799 cri.go:89] found id: "9ba5c620ce249031919bf6e32638f8dc691e315b015d289385f70209d9e74ffd"
	I1013 21:05:03.596350   14799 cri.go:89] found id: "d970bcf470d76f206503134fe466e51e7976bbfa6b4a2e3fe3625f80149dfc31"
	I1013 21:05:03.596353   14799 cri.go:89] found id: "ba960f407a05af3ebfd3183ad26dc7b344cd8382fa28adf90d68c4f98db5420a"
	I1013 21:05:03.596356   14799 cri.go:89] found id: "d94a39038ca9352797188400141068b55b9094aa8d0f51d361b0e2d6590817cb"
	I1013 21:05:03.596360   14799 cri.go:89] found id: "5842ed1dd0727229088f521445604b2c1a71d16ca6035743c549feb0f0139a21"
	I1013 21:05:03.596364   14799 cri.go:89] found id: "fa6943addc3e3fa4467c9e16c42f411b5ae91ed87ff413a74875379b524422bf"
	I1013 21:05:03.596368   14799 cri.go:89] found id: "af2a904ce7f6b934880d46e7cf2b5afbeb8c28d04de3438f7f4ce62dc8173941"
	I1013 21:05:03.596372   14799 cri.go:89] found id: "24771ab281e111109e2945e2f4112c4fb92daca6a6eb93304fbc65748bee14e7"
	I1013 21:05:03.596375   14799 cri.go:89] found id: "99d43d6662679bda28129b2b95cba7f724d95042cc2bd6f3c957a1e2ba16b5d8"
	I1013 21:05:03.596378   14799 cri.go:89] found id: "056c2dbfb314d220189391293f077c99025846b4b9f34abe273b798f61317570"
	I1013 21:05:03.596381   14799 cri.go:89] found id: "99ba07ab68f8f8928330f6da7154c1fa9e2a9c5025906f287d474fc44f71bcd3"
	I1013 21:05:03.596390   14799 cri.go:89] found id: "b69697c681afb5d9720692f76f4c7fb1f08cbb53f5d5c7219d2ecab1e81e51ad"
	I1013 21:05:03.596394   14799 cri.go:89] found id: "3cef779926c40efd23a69e4a3f37a0bddcdaf08e16cbb526f1d13502db7a95a1"
	I1013 21:05:03.596397   14799 cri.go:89] found id: "65658d48b6c6a4b767ffad937ef4f74467a3d49eb81ae57813596987defa754b"
	I1013 21:05:03.596401   14799 cri.go:89] found id: "7eaba707b03a138f0291c2f0905cf2c10e0c0e7a7d56a206cb7266035a7280bb"
	I1013 21:05:03.596410   14799 cri.go:89] found id: ""
	I1013 21:05:03.596458   14799 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 21:05:03.611238   14799 out.go:203] 
	W1013 21:05:03.614225   14799 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T21:05:03Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T21:05:03Z" level=error msg="open /run/runc: no such file or directory"
	
	W1013 21:05:03.614249   14799 out.go:285] * 
	* 
	W1013 21:05:03.618949   14799 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1013 21:05:03.621889   14799 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-arm64 -p addons-421494 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-421494 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-421494 addons disable ingress --alsologtostderr -v=1: exit status 11 (259.684543ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1013 21:05:03.681094   14843 out.go:360] Setting OutFile to fd 1 ...
	I1013 21:05:03.681309   14843 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:05:03.681342   14843 out.go:374] Setting ErrFile to fd 2...
	I1013 21:05:03.681364   14843 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:05:03.681755   14843 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-2495/.minikube/bin
	I1013 21:05:03.682149   14843 mustload.go:65] Loading cluster: addons-421494
	I1013 21:05:03.682828   14843 config.go:182] Loaded profile config "addons-421494": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:05:03.682878   14843 addons.go:606] checking whether the cluster is paused
	I1013 21:05:03.683029   14843 config.go:182] Loaded profile config "addons-421494": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:05:03.683069   14843 host.go:66] Checking if "addons-421494" exists ...
	I1013 21:05:03.683768   14843 cli_runner.go:164] Run: docker container inspect addons-421494 --format={{.State.Status}}
	I1013 21:05:03.700685   14843 ssh_runner.go:195] Run: systemctl --version
	I1013 21:05:03.700750   14843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-421494
	I1013 21:05:03.718952   14843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/addons-421494/id_rsa Username:docker}
	I1013 21:05:03.826137   14843 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 21:05:03.826232   14843 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 21:05:03.856407   14843 cri.go:89] found id: "0e6df95932d1b95e3e13abaa1427e6759f8d88bd4cb829c738db95bbe9cdf5c1"
	I1013 21:05:03.856428   14843 cri.go:89] found id: "ebb997c7d79d583428b1355de2046886a326ae3c3e20f70bbe1ae0f9e6703f7f"
	I1013 21:05:03.856434   14843 cri.go:89] found id: "0c7386ac64481c921e58e89cc194fda8203de7ead964013cac7400057edd284b"
	I1013 21:05:03.856438   14843 cri.go:89] found id: "313e4250764b6b9ca250b085946faed4744d1dc6516ddd2c7da718d0652717f3"
	I1013 21:05:03.856443   14843 cri.go:89] found id: "4666db466f3f81ef2ee14c4d6b8e30164f55fc982ba337c36b3afda038eb1963"
	I1013 21:05:03.856446   14843 cri.go:89] found id: "26d51c5b89e1953bc71b605eafac087308c86278491ae0cac3f48f2a37104464"
	I1013 21:05:03.856449   14843 cri.go:89] found id: "181410ca5fe49d39df6815ad7e630167615f63fc2ac2ea29759d347e79cf62cb"
	I1013 21:05:03.856452   14843 cri.go:89] found id: "38812285e7c22cd5e7853a2d043e5969e7e9e46a305880956354a8717537af6b"
	I1013 21:05:03.856455   14843 cri.go:89] found id: "18ed9fad968275ac6372a8352ae2ebed473b24a41d01a533887860e8f4567b60"
	I1013 21:05:03.856461   14843 cri.go:89] found id: "9ba5c620ce249031919bf6e32638f8dc691e315b015d289385f70209d9e74ffd"
	I1013 21:05:03.856464   14843 cri.go:89] found id: "d970bcf470d76f206503134fe466e51e7976bbfa6b4a2e3fe3625f80149dfc31"
	I1013 21:05:03.856467   14843 cri.go:89] found id: "ba960f407a05af3ebfd3183ad26dc7b344cd8382fa28adf90d68c4f98db5420a"
	I1013 21:05:03.856470   14843 cri.go:89] found id: "d94a39038ca9352797188400141068b55b9094aa8d0f51d361b0e2d6590817cb"
	I1013 21:05:03.856474   14843 cri.go:89] found id: "5842ed1dd0727229088f521445604b2c1a71d16ca6035743c549feb0f0139a21"
	I1013 21:05:03.856478   14843 cri.go:89] found id: "fa6943addc3e3fa4467c9e16c42f411b5ae91ed87ff413a74875379b524422bf"
	I1013 21:05:03.856484   14843 cri.go:89] found id: "af2a904ce7f6b934880d46e7cf2b5afbeb8c28d04de3438f7f4ce62dc8173941"
	I1013 21:05:03.856487   14843 cri.go:89] found id: "24771ab281e111109e2945e2f4112c4fb92daca6a6eb93304fbc65748bee14e7"
	I1013 21:05:03.856492   14843 cri.go:89] found id: "99d43d6662679bda28129b2b95cba7f724d95042cc2bd6f3c957a1e2ba16b5d8"
	I1013 21:05:03.856495   14843 cri.go:89] found id: "056c2dbfb314d220189391293f077c99025846b4b9f34abe273b798f61317570"
	I1013 21:05:03.856498   14843 cri.go:89] found id: "99ba07ab68f8f8928330f6da7154c1fa9e2a9c5025906f287d474fc44f71bcd3"
	I1013 21:05:03.856504   14843 cri.go:89] found id: "b69697c681afb5d9720692f76f4c7fb1f08cbb53f5d5c7219d2ecab1e81e51ad"
	I1013 21:05:03.856511   14843 cri.go:89] found id: "3cef779926c40efd23a69e4a3f37a0bddcdaf08e16cbb526f1d13502db7a95a1"
	I1013 21:05:03.856514   14843 cri.go:89] found id: "65658d48b6c6a4b767ffad937ef4f74467a3d49eb81ae57813596987defa754b"
	I1013 21:05:03.856517   14843 cri.go:89] found id: "7eaba707b03a138f0291c2f0905cf2c10e0c0e7a7d56a206cb7266035a7280bb"
	I1013 21:05:03.856520   14843 cri.go:89] found id: ""
	I1013 21:05:03.856569   14843 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 21:05:03.871140   14843 out.go:203] 
	W1013 21:05:03.874049   14843 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T21:05:03Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T21:05:03Z" level=error msg="open /run/runc: no such file or directory"
	
	W1013 21:05:03.874089   14843 out.go:285] * 
	* 
	W1013 21:05:03.878819   14843 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1013 21:05:03.881803   14843 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-arm64 -p addons-421494 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (147.27s)

                                                
                                    
TestAddons/parallel/InspektorGadget (6.31s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-lrgtr" [5b3ae878-3eb3-475a-8ddb-f65f45f6a246] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003456977s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-421494 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-421494 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (308.653449ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1013 21:02:36.373726   12647 out.go:360] Setting OutFile to fd 1 ...
	I1013 21:02:36.373964   12647 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:02:36.373979   12647 out.go:374] Setting ErrFile to fd 2...
	I1013 21:02:36.373984   12647 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:02:36.374277   12647 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-2495/.minikube/bin
	I1013 21:02:36.374685   12647 mustload.go:65] Loading cluster: addons-421494
	I1013 21:02:36.375045   12647 config.go:182] Loaded profile config "addons-421494": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:02:36.375054   12647 addons.go:606] checking whether the cluster is paused
	I1013 21:02:36.375154   12647 config.go:182] Loaded profile config "addons-421494": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:02:36.375167   12647 host.go:66] Checking if "addons-421494" exists ...
	I1013 21:02:36.375621   12647 cli_runner.go:164] Run: docker container inspect addons-421494 --format={{.State.Status}}
	I1013 21:02:36.412084   12647 ssh_runner.go:195] Run: systemctl --version
	I1013 21:02:36.412213   12647 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-421494
	I1013 21:02:36.441688   12647 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/addons-421494/id_rsa Username:docker}
	I1013 21:02:36.546065   12647 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 21:02:36.546141   12647 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 21:02:36.576796   12647 cri.go:89] found id: "ebb997c7d79d583428b1355de2046886a326ae3c3e20f70bbe1ae0f9e6703f7f"
	I1013 21:02:36.576815   12647 cri.go:89] found id: "0c7386ac64481c921e58e89cc194fda8203de7ead964013cac7400057edd284b"
	I1013 21:02:36.576820   12647 cri.go:89] found id: "313e4250764b6b9ca250b085946faed4744d1dc6516ddd2c7da718d0652717f3"
	I1013 21:02:36.576823   12647 cri.go:89] found id: "4666db466f3f81ef2ee14c4d6b8e30164f55fc982ba337c36b3afda038eb1963"
	I1013 21:02:36.576827   12647 cri.go:89] found id: "26d51c5b89e1953bc71b605eafac087308c86278491ae0cac3f48f2a37104464"
	I1013 21:02:36.576830   12647 cri.go:89] found id: "181410ca5fe49d39df6815ad7e630167615f63fc2ac2ea29759d347e79cf62cb"
	I1013 21:02:36.576838   12647 cri.go:89] found id: "38812285e7c22cd5e7853a2d043e5969e7e9e46a305880956354a8717537af6b"
	I1013 21:02:36.576841   12647 cri.go:89] found id: "18ed9fad968275ac6372a8352ae2ebed473b24a41d01a533887860e8f4567b60"
	I1013 21:02:36.576844   12647 cri.go:89] found id: "9ba5c620ce249031919bf6e32638f8dc691e315b015d289385f70209d9e74ffd"
	I1013 21:02:36.576850   12647 cri.go:89] found id: "d970bcf470d76f206503134fe466e51e7976bbfa6b4a2e3fe3625f80149dfc31"
	I1013 21:02:36.576854   12647 cri.go:89] found id: "ba960f407a05af3ebfd3183ad26dc7b344cd8382fa28adf90d68c4f98db5420a"
	I1013 21:02:36.576857   12647 cri.go:89] found id: "d94a39038ca9352797188400141068b55b9094aa8d0f51d361b0e2d6590817cb"
	I1013 21:02:36.576860   12647 cri.go:89] found id: "5842ed1dd0727229088f521445604b2c1a71d16ca6035743c549feb0f0139a21"
	I1013 21:02:36.576863   12647 cri.go:89] found id: "fa6943addc3e3fa4467c9e16c42f411b5ae91ed87ff413a74875379b524422bf"
	I1013 21:02:36.576866   12647 cri.go:89] found id: "af2a904ce7f6b934880d46e7cf2b5afbeb8c28d04de3438f7f4ce62dc8173941"
	I1013 21:02:36.576871   12647 cri.go:89] found id: "24771ab281e111109e2945e2f4112c4fb92daca6a6eb93304fbc65748bee14e7"
	I1013 21:02:36.576874   12647 cri.go:89] found id: "99d43d6662679bda28129b2b95cba7f724d95042cc2bd6f3c957a1e2ba16b5d8"
	I1013 21:02:36.576877   12647 cri.go:89] found id: "056c2dbfb314d220189391293f077c99025846b4b9f34abe273b798f61317570"
	I1013 21:02:36.576881   12647 cri.go:89] found id: "99ba07ab68f8f8928330f6da7154c1fa9e2a9c5025906f287d474fc44f71bcd3"
	I1013 21:02:36.576884   12647 cri.go:89] found id: "b69697c681afb5d9720692f76f4c7fb1f08cbb53f5d5c7219d2ecab1e81e51ad"
	I1013 21:02:36.576890   12647 cri.go:89] found id: "3cef779926c40efd23a69e4a3f37a0bddcdaf08e16cbb526f1d13502db7a95a1"
	I1013 21:02:36.576893   12647 cri.go:89] found id: "65658d48b6c6a4b767ffad937ef4f74467a3d49eb81ae57813596987defa754b"
	I1013 21:02:36.576896   12647 cri.go:89] found id: "7eaba707b03a138f0291c2f0905cf2c10e0c0e7a7d56a206cb7266035a7280bb"
	I1013 21:02:36.576899   12647 cri.go:89] found id: ""
	I1013 21:02:36.576945   12647 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 21:02:36.592547   12647 out.go:203] 
	W1013 21:02:36.595362   12647 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T21:02:36Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T21:02:36Z" level=error msg="open /run/runc: no such file or directory"
	
	W1013 21:02:36.595388   12647 out.go:285] * 
	* 
	W1013 21:02:36.600765   12647 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1013 21:02:36.603907   12647 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-arm64 -p addons-421494 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (6.31s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.4s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 4.386708ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-hrqb8" [496e3426-b9d3-4219-ba0d-ab73c596e817] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004144282s
addons_test.go:463: (dbg) Run:  kubectl --context addons-421494 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-421494 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-421494 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (300.184015ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1013 21:02:30.085297   12558 out.go:360] Setting OutFile to fd 1 ...
	I1013 21:02:30.085554   12558 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:02:30.085594   12558 out.go:374] Setting ErrFile to fd 2...
	I1013 21:02:30.085616   12558 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:02:30.085924   12558 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-2495/.minikube/bin
	I1013 21:02:30.086287   12558 mustload.go:65] Loading cluster: addons-421494
	I1013 21:02:30.086748   12558 config.go:182] Loaded profile config "addons-421494": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:02:30.086796   12558 addons.go:606] checking whether the cluster is paused
	I1013 21:02:30.086937   12558 config.go:182] Loaded profile config "addons-421494": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:02:30.086981   12558 host.go:66] Checking if "addons-421494" exists ...
	I1013 21:02:30.087500   12558 cli_runner.go:164] Run: docker container inspect addons-421494 --format={{.State.Status}}
	I1013 21:02:30.110843   12558 ssh_runner.go:195] Run: systemctl --version
	I1013 21:02:30.110919   12558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-421494
	I1013 21:02:30.137667   12558 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/addons-421494/id_rsa Username:docker}
	I1013 21:02:30.238407   12558 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 21:02:30.238500   12558 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 21:02:30.268522   12558 cri.go:89] found id: "ebb997c7d79d583428b1355de2046886a326ae3c3e20f70bbe1ae0f9e6703f7f"
	I1013 21:02:30.268545   12558 cri.go:89] found id: "0c7386ac64481c921e58e89cc194fda8203de7ead964013cac7400057edd284b"
	I1013 21:02:30.268550   12558 cri.go:89] found id: "313e4250764b6b9ca250b085946faed4744d1dc6516ddd2c7da718d0652717f3"
	I1013 21:02:30.268558   12558 cri.go:89] found id: "4666db466f3f81ef2ee14c4d6b8e30164f55fc982ba337c36b3afda038eb1963"
	I1013 21:02:30.268562   12558 cri.go:89] found id: "26d51c5b89e1953bc71b605eafac087308c86278491ae0cac3f48f2a37104464"
	I1013 21:02:30.268566   12558 cri.go:89] found id: "181410ca5fe49d39df6815ad7e630167615f63fc2ac2ea29759d347e79cf62cb"
	I1013 21:02:30.268570   12558 cri.go:89] found id: "38812285e7c22cd5e7853a2d043e5969e7e9e46a305880956354a8717537af6b"
	I1013 21:02:30.268573   12558 cri.go:89] found id: "18ed9fad968275ac6372a8352ae2ebed473b24a41d01a533887860e8f4567b60"
	I1013 21:02:30.268576   12558 cri.go:89] found id: "9ba5c620ce249031919bf6e32638f8dc691e315b015d289385f70209d9e74ffd"
	I1013 21:02:30.268583   12558 cri.go:89] found id: "d970bcf470d76f206503134fe466e51e7976bbfa6b4a2e3fe3625f80149dfc31"
	I1013 21:02:30.268586   12558 cri.go:89] found id: "ba960f407a05af3ebfd3183ad26dc7b344cd8382fa28adf90d68c4f98db5420a"
	I1013 21:02:30.268589   12558 cri.go:89] found id: "d94a39038ca9352797188400141068b55b9094aa8d0f51d361b0e2d6590817cb"
	I1013 21:02:30.268592   12558 cri.go:89] found id: "5842ed1dd0727229088f521445604b2c1a71d16ca6035743c549feb0f0139a21"
	I1013 21:02:30.268595   12558 cri.go:89] found id: "fa6943addc3e3fa4467c9e16c42f411b5ae91ed87ff413a74875379b524422bf"
	I1013 21:02:30.268598   12558 cri.go:89] found id: "af2a904ce7f6b934880d46e7cf2b5afbeb8c28d04de3438f7f4ce62dc8173941"
	I1013 21:02:30.268603   12558 cri.go:89] found id: "24771ab281e111109e2945e2f4112c4fb92daca6a6eb93304fbc65748bee14e7"
	I1013 21:02:30.268606   12558 cri.go:89] found id: "99d43d6662679bda28129b2b95cba7f724d95042cc2bd6f3c957a1e2ba16b5d8"
	I1013 21:02:30.268609   12558 cri.go:89] found id: "056c2dbfb314d220189391293f077c99025846b4b9f34abe273b798f61317570"
	I1013 21:02:30.268612   12558 cri.go:89] found id: "99ba07ab68f8f8928330f6da7154c1fa9e2a9c5025906f287d474fc44f71bcd3"
	I1013 21:02:30.268615   12558 cri.go:89] found id: "b69697c681afb5d9720692f76f4c7fb1f08cbb53f5d5c7219d2ecab1e81e51ad"
	I1013 21:02:30.268620   12558 cri.go:89] found id: "3cef779926c40efd23a69e4a3f37a0bddcdaf08e16cbb526f1d13502db7a95a1"
	I1013 21:02:30.268623   12558 cri.go:89] found id: "65658d48b6c6a4b767ffad937ef4f74467a3d49eb81ae57813596987defa754b"
	I1013 21:02:30.268626   12558 cri.go:89] found id: "7eaba707b03a138f0291c2f0905cf2c10e0c0e7a7d56a206cb7266035a7280bb"
	I1013 21:02:30.268628   12558 cri.go:89] found id: ""
	I1013 21:02:30.268681   12558 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 21:02:30.283960   12558 out.go:203] 
	W1013 21:02:30.286780   12558 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T21:02:30Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T21:02:30Z" level=error msg="open /run/runc: no such file or directory"
	
	W1013 21:02:30.286802   12558 out.go:285] * 
	* 
	W1013 21:02:30.291655   12558 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1013 21:02:30.294503   12558 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-arm64 -p addons-421494 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (6.40s)

                                                
                                    
TestAddons/parallel/CSI (40.55s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1013 21:02:21.055145    4299 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1013 21:02:21.064166    4299 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1013 21:02:21.064190    4299 kapi.go:107] duration metric: took 9.06054ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 9.070188ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-421494 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-421494 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-421494 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-421494 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-421494 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-421494 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-421494 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-421494 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-421494 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-421494 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-421494 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-421494 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-421494 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-421494 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-421494 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-421494 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-421494 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-421494 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [207a6320-f77e-48e8-a6e8-d423e3dcbcb9] Pending
helpers_test.go:352: "task-pv-pod" [207a6320-f77e-48e8-a6e8-d423e3dcbcb9] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [207a6320-f77e-48e8-a6e8-d423e3dcbcb9] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.004095092s
addons_test.go:572: (dbg) Run:  kubectl --context addons-421494 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-421494 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-421494 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-421494 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-421494 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-421494 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-421494 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-421494 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-421494 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-421494 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [c8fb92a2-07ba-4760-8db8-7c6db46bfeb1] Pending
helpers_test.go:352: "task-pv-pod-restore" [c8fb92a2-07ba-4760-8db8-7c6db46bfeb1] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [c8fb92a2-07ba-4760-8db8-7c6db46bfeb1] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.005209245s
addons_test.go:614: (dbg) Run:  kubectl --context addons-421494 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-421494 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-421494 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-421494 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-421494 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (272.558329ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1013 21:03:01.134909   13513 out.go:360] Setting OutFile to fd 1 ...
	I1013 21:03:01.135137   13513 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:03:01.135144   13513 out.go:374] Setting ErrFile to fd 2...
	I1013 21:03:01.135148   13513 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:03:01.135489   13513 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-2495/.minikube/bin
	I1013 21:03:01.135858   13513 mustload.go:65] Loading cluster: addons-421494
	I1013 21:03:01.136314   13513 config.go:182] Loaded profile config "addons-421494": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:03:01.136327   13513 addons.go:606] checking whether the cluster is paused
	I1013 21:03:01.136432   13513 config.go:182] Loaded profile config "addons-421494": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:03:01.136445   13513 host.go:66] Checking if "addons-421494" exists ...
	I1013 21:03:01.136865   13513 cli_runner.go:164] Run: docker container inspect addons-421494 --format={{.State.Status}}
	I1013 21:03:01.158505   13513 ssh_runner.go:195] Run: systemctl --version
	I1013 21:03:01.158562   13513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-421494
	I1013 21:03:01.176391   13513 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/addons-421494/id_rsa Username:docker}
	I1013 21:03:01.282412   13513 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 21:03:01.282497   13513 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 21:03:01.315480   13513 cri.go:89] found id: "ebb997c7d79d583428b1355de2046886a326ae3c3e20f70bbe1ae0f9e6703f7f"
	I1013 21:03:01.315498   13513 cri.go:89] found id: "0c7386ac64481c921e58e89cc194fda8203de7ead964013cac7400057edd284b"
	I1013 21:03:01.315503   13513 cri.go:89] found id: "313e4250764b6b9ca250b085946faed4744d1dc6516ddd2c7da718d0652717f3"
	I1013 21:03:01.315507   13513 cri.go:89] found id: "4666db466f3f81ef2ee14c4d6b8e30164f55fc982ba337c36b3afda038eb1963"
	I1013 21:03:01.315511   13513 cri.go:89] found id: "26d51c5b89e1953bc71b605eafac087308c86278491ae0cac3f48f2a37104464"
	I1013 21:03:01.315515   13513 cri.go:89] found id: "181410ca5fe49d39df6815ad7e630167615f63fc2ac2ea29759d347e79cf62cb"
	I1013 21:03:01.315518   13513 cri.go:89] found id: "38812285e7c22cd5e7853a2d043e5969e7e9e46a305880956354a8717537af6b"
	I1013 21:03:01.315521   13513 cri.go:89] found id: "18ed9fad968275ac6372a8352ae2ebed473b24a41d01a533887860e8f4567b60"
	I1013 21:03:01.315524   13513 cri.go:89] found id: "9ba5c620ce249031919bf6e32638f8dc691e315b015d289385f70209d9e74ffd"
	I1013 21:03:01.315531   13513 cri.go:89] found id: "d970bcf470d76f206503134fe466e51e7976bbfa6b4a2e3fe3625f80149dfc31"
	I1013 21:03:01.315534   13513 cri.go:89] found id: "ba960f407a05af3ebfd3183ad26dc7b344cd8382fa28adf90d68c4f98db5420a"
	I1013 21:03:01.315537   13513 cri.go:89] found id: "d94a39038ca9352797188400141068b55b9094aa8d0f51d361b0e2d6590817cb"
	I1013 21:03:01.315540   13513 cri.go:89] found id: "5842ed1dd0727229088f521445604b2c1a71d16ca6035743c549feb0f0139a21"
	I1013 21:03:01.315543   13513 cri.go:89] found id: "fa6943addc3e3fa4467c9e16c42f411b5ae91ed87ff413a74875379b524422bf"
	I1013 21:03:01.315546   13513 cri.go:89] found id: "af2a904ce7f6b934880d46e7cf2b5afbeb8c28d04de3438f7f4ce62dc8173941"
	I1013 21:03:01.315552   13513 cri.go:89] found id: "24771ab281e111109e2945e2f4112c4fb92daca6a6eb93304fbc65748bee14e7"
	I1013 21:03:01.315556   13513 cri.go:89] found id: "99d43d6662679bda28129b2b95cba7f724d95042cc2bd6f3c957a1e2ba16b5d8"
	I1013 21:03:01.315560   13513 cri.go:89] found id: "056c2dbfb314d220189391293f077c99025846b4b9f34abe273b798f61317570"
	I1013 21:03:01.315564   13513 cri.go:89] found id: "99ba07ab68f8f8928330f6da7154c1fa9e2a9c5025906f287d474fc44f71bcd3"
	I1013 21:03:01.315567   13513 cri.go:89] found id: "b69697c681afb5d9720692f76f4c7fb1f08cbb53f5d5c7219d2ecab1e81e51ad"
	I1013 21:03:01.315573   13513 cri.go:89] found id: "3cef779926c40efd23a69e4a3f37a0bddcdaf08e16cbb526f1d13502db7a95a1"
	I1013 21:03:01.315576   13513 cri.go:89] found id: "65658d48b6c6a4b767ffad937ef4f74467a3d49eb81ae57813596987defa754b"
	I1013 21:03:01.315579   13513 cri.go:89] found id: "7eaba707b03a138f0291c2f0905cf2c10e0c0e7a7d56a206cb7266035a7280bb"
	I1013 21:03:01.315581   13513 cri.go:89] found id: ""
	I1013 21:03:01.315631   13513 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 21:03:01.330696   13513 out.go:203] 
	W1013 21:03:01.333781   13513 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T21:03:01Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T21:03:01Z" level=error msg="open /run/runc: no such file or directory"
	
	W1013 21:03:01.333809   13513 out.go:285] * 
	* 
	W1013 21:03:01.340189   13513 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1013 21:03:01.343113   13513 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-arm64 -p addons-421494 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-421494 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-421494 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (255.435592ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1013 21:03:01.405032   13556 out.go:360] Setting OutFile to fd 1 ...
	I1013 21:03:01.405232   13556 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:03:01.405266   13556 out.go:374] Setting ErrFile to fd 2...
	I1013 21:03:01.405287   13556 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:03:01.405559   13556 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-2495/.minikube/bin
	I1013 21:03:01.405865   13556 mustload.go:65] Loading cluster: addons-421494
	I1013 21:03:01.406259   13556 config.go:182] Loaded profile config "addons-421494": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:03:01.406300   13556 addons.go:606] checking whether the cluster is paused
	I1013 21:03:01.406432   13556 config.go:182] Loaded profile config "addons-421494": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:03:01.406476   13556 host.go:66] Checking if "addons-421494" exists ...
	I1013 21:03:01.406991   13556 cli_runner.go:164] Run: docker container inspect addons-421494 --format={{.State.Status}}
	I1013 21:03:01.423496   13556 ssh_runner.go:195] Run: systemctl --version
	I1013 21:03:01.423544   13556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-421494
	I1013 21:03:01.441317   13556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/addons-421494/id_rsa Username:docker}
	I1013 21:03:01.542166   13556 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 21:03:01.542299   13556 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 21:03:01.571500   13556 cri.go:89] found id: "ebb997c7d79d583428b1355de2046886a326ae3c3e20f70bbe1ae0f9e6703f7f"
	I1013 21:03:01.571525   13556 cri.go:89] found id: "0c7386ac64481c921e58e89cc194fda8203de7ead964013cac7400057edd284b"
	I1013 21:03:01.571530   13556 cri.go:89] found id: "313e4250764b6b9ca250b085946faed4744d1dc6516ddd2c7da718d0652717f3"
	I1013 21:03:01.571534   13556 cri.go:89] found id: "4666db466f3f81ef2ee14c4d6b8e30164f55fc982ba337c36b3afda038eb1963"
	I1013 21:03:01.571538   13556 cri.go:89] found id: "26d51c5b89e1953bc71b605eafac087308c86278491ae0cac3f48f2a37104464"
	I1013 21:03:01.571541   13556 cri.go:89] found id: "181410ca5fe49d39df6815ad7e630167615f63fc2ac2ea29759d347e79cf62cb"
	I1013 21:03:01.571544   13556 cri.go:89] found id: "38812285e7c22cd5e7853a2d043e5969e7e9e46a305880956354a8717537af6b"
	I1013 21:03:01.571547   13556 cri.go:89] found id: "18ed9fad968275ac6372a8352ae2ebed473b24a41d01a533887860e8f4567b60"
	I1013 21:03:01.571551   13556 cri.go:89] found id: "9ba5c620ce249031919bf6e32638f8dc691e315b015d289385f70209d9e74ffd"
	I1013 21:03:01.571557   13556 cri.go:89] found id: "d970bcf470d76f206503134fe466e51e7976bbfa6b4a2e3fe3625f80149dfc31"
	I1013 21:03:01.571560   13556 cri.go:89] found id: "ba960f407a05af3ebfd3183ad26dc7b344cd8382fa28adf90d68c4f98db5420a"
	I1013 21:03:01.571564   13556 cri.go:89] found id: "d94a39038ca9352797188400141068b55b9094aa8d0f51d361b0e2d6590817cb"
	I1013 21:03:01.571567   13556 cri.go:89] found id: "5842ed1dd0727229088f521445604b2c1a71d16ca6035743c549feb0f0139a21"
	I1013 21:03:01.571571   13556 cri.go:89] found id: "fa6943addc3e3fa4467c9e16c42f411b5ae91ed87ff413a74875379b524422bf"
	I1013 21:03:01.571574   13556 cri.go:89] found id: "af2a904ce7f6b934880d46e7cf2b5afbeb8c28d04de3438f7f4ce62dc8173941"
	I1013 21:03:01.571585   13556 cri.go:89] found id: "24771ab281e111109e2945e2f4112c4fb92daca6a6eb93304fbc65748bee14e7"
	I1013 21:03:01.571591   13556 cri.go:89] found id: "99d43d6662679bda28129b2b95cba7f724d95042cc2bd6f3c957a1e2ba16b5d8"
	I1013 21:03:01.571596   13556 cri.go:89] found id: "056c2dbfb314d220189391293f077c99025846b4b9f34abe273b798f61317570"
	I1013 21:03:01.571600   13556 cri.go:89] found id: "99ba07ab68f8f8928330f6da7154c1fa9e2a9c5025906f287d474fc44f71bcd3"
	I1013 21:03:01.571603   13556 cri.go:89] found id: "b69697c681afb5d9720692f76f4c7fb1f08cbb53f5d5c7219d2ecab1e81e51ad"
	I1013 21:03:01.571607   13556 cri.go:89] found id: "3cef779926c40efd23a69e4a3f37a0bddcdaf08e16cbb526f1d13502db7a95a1"
	I1013 21:03:01.571614   13556 cri.go:89] found id: "65658d48b6c6a4b767ffad937ef4f74467a3d49eb81ae57813596987defa754b"
	I1013 21:03:01.571627   13556 cri.go:89] found id: "7eaba707b03a138f0291c2f0905cf2c10e0c0e7a7d56a206cb7266035a7280bb"
	I1013 21:03:01.571630   13556 cri.go:89] found id: ""
	I1013 21:03:01.571681   13556 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 21:03:01.588257   13556 out.go:203] 
	W1013 21:03:01.591141   13556 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T21:03:01Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T21:03:01Z" level=error msg="open /run/runc: no such file or directory"
	
	W1013 21:03:01.591165   13556 out.go:285] * 
	* 
	W1013 21:03:01.596064   13556 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1013 21:03:01.599017   13556 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-arm64 -p addons-421494 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (40.55s)
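Note that the storage path of this test passed end to end (claim bound, pod ran, snapshot taken and restored); only the trailing addon-disable calls failed on the paused check described earlier. For reference, a condensed sketch of the same csi-hostpath-driver flow the assertions above walk through, assuming the testdata/csi-hostpath-driver manifests from the minikube repository are available locally:

	# provision a claim and a pod that mounts it
	kubectl --context addons-421494 create -f testdata/csi-hostpath-driver/pvc.yaml
	kubectl --context addons-421494 create -f testdata/csi-hostpath-driver/pv-pod.yaml
	kubectl --context addons-421494 wait --for=condition=Ready pod/task-pv-pod --timeout=6m
	# snapshot the claim, then restore the snapshot into a new claim and pod
	kubectl --context addons-421494 create -f testdata/csi-hostpath-driver/snapshot.yaml
	kubectl --context addons-421494 delete pod task-pv-pod
	kubectl --context addons-421494 delete pvc hpvc
	kubectl --context addons-421494 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
	kubectl --context addons-421494 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
	kubectl --context addons-421494 wait --for=condition=Ready pod/task-pv-pod-restore --timeout=6m
	# clean up
	kubectl --context addons-421494 delete pod task-pv-pod-restore
	kubectl --context addons-421494 delete pvc hpvc-restore
	kubectl --context addons-421494 delete volumesnapshot new-snapshot-demo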

                                                
                                    
x
+
TestAddons/parallel/Headlamp (3.6s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-421494 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable headlamp -p addons-421494 --alsologtostderr -v=1: exit status 11 (322.360967ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1013 21:02:20.381492   11871 out.go:360] Setting OutFile to fd 1 ...
	I1013 21:02:20.381730   11871 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:02:20.381765   11871 out.go:374] Setting ErrFile to fd 2...
	I1013 21:02:20.381784   11871 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:02:20.382065   11871 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-2495/.minikube/bin
	I1013 21:02:20.382372   11871 mustload.go:65] Loading cluster: addons-421494
	I1013 21:02:20.382754   11871 config.go:182] Loaded profile config "addons-421494": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:02:20.382786   11871 addons.go:606] checking whether the cluster is paused
	I1013 21:02:20.382911   11871 config.go:182] Loaded profile config "addons-421494": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:02:20.382942   11871 host.go:66] Checking if "addons-421494" exists ...
	I1013 21:02:20.383451   11871 cli_runner.go:164] Run: docker container inspect addons-421494 --format={{.State.Status}}
	I1013 21:02:20.411964   11871 ssh_runner.go:195] Run: systemctl --version
	I1013 21:02:20.412019   11871 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-421494
	I1013 21:02:20.442957   11871 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/addons-421494/id_rsa Username:docker}
	I1013 21:02:20.550405   11871 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 21:02:20.550486   11871 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 21:02:20.591494   11871 cri.go:89] found id: "ebb997c7d79d583428b1355de2046886a326ae3c3e20f70bbe1ae0f9e6703f7f"
	I1013 21:02:20.591513   11871 cri.go:89] found id: "0c7386ac64481c921e58e89cc194fda8203de7ead964013cac7400057edd284b"
	I1013 21:02:20.591522   11871 cri.go:89] found id: "313e4250764b6b9ca250b085946faed4744d1dc6516ddd2c7da718d0652717f3"
	I1013 21:02:20.591527   11871 cri.go:89] found id: "4666db466f3f81ef2ee14c4d6b8e30164f55fc982ba337c36b3afda038eb1963"
	I1013 21:02:20.591531   11871 cri.go:89] found id: "26d51c5b89e1953bc71b605eafac087308c86278491ae0cac3f48f2a37104464"
	I1013 21:02:20.591535   11871 cri.go:89] found id: "181410ca5fe49d39df6815ad7e630167615f63fc2ac2ea29759d347e79cf62cb"
	I1013 21:02:20.591538   11871 cri.go:89] found id: "38812285e7c22cd5e7853a2d043e5969e7e9e46a305880956354a8717537af6b"
	I1013 21:02:20.591541   11871 cri.go:89] found id: "18ed9fad968275ac6372a8352ae2ebed473b24a41d01a533887860e8f4567b60"
	I1013 21:02:20.591544   11871 cri.go:89] found id: "9ba5c620ce249031919bf6e32638f8dc691e315b015d289385f70209d9e74ffd"
	I1013 21:02:20.591550   11871 cri.go:89] found id: "d970bcf470d76f206503134fe466e51e7976bbfa6b4a2e3fe3625f80149dfc31"
	I1013 21:02:20.591553   11871 cri.go:89] found id: "ba960f407a05af3ebfd3183ad26dc7b344cd8382fa28adf90d68c4f98db5420a"
	I1013 21:02:20.591556   11871 cri.go:89] found id: "d94a39038ca9352797188400141068b55b9094aa8d0f51d361b0e2d6590817cb"
	I1013 21:02:20.591559   11871 cri.go:89] found id: "5842ed1dd0727229088f521445604b2c1a71d16ca6035743c549feb0f0139a21"
	I1013 21:02:20.591562   11871 cri.go:89] found id: "fa6943addc3e3fa4467c9e16c42f411b5ae91ed87ff413a74875379b524422bf"
	I1013 21:02:20.591565   11871 cri.go:89] found id: "af2a904ce7f6b934880d46e7cf2b5afbeb8c28d04de3438f7f4ce62dc8173941"
	I1013 21:02:20.591570   11871 cri.go:89] found id: "24771ab281e111109e2945e2f4112c4fb92daca6a6eb93304fbc65748bee14e7"
	I1013 21:02:20.591573   11871 cri.go:89] found id: "99d43d6662679bda28129b2b95cba7f724d95042cc2bd6f3c957a1e2ba16b5d8"
	I1013 21:02:20.591576   11871 cri.go:89] found id: "056c2dbfb314d220189391293f077c99025846b4b9f34abe273b798f61317570"
	I1013 21:02:20.591579   11871 cri.go:89] found id: "99ba07ab68f8f8928330f6da7154c1fa9e2a9c5025906f287d474fc44f71bcd3"
	I1013 21:02:20.591582   11871 cri.go:89] found id: "b69697c681afb5d9720692f76f4c7fb1f08cbb53f5d5c7219d2ecab1e81e51ad"
	I1013 21:02:20.591587   11871 cri.go:89] found id: "3cef779926c40efd23a69e4a3f37a0bddcdaf08e16cbb526f1d13502db7a95a1"
	I1013 21:02:20.591590   11871 cri.go:89] found id: "65658d48b6c6a4b767ffad937ef4f74467a3d49eb81ae57813596987defa754b"
	I1013 21:02:20.591592   11871 cri.go:89] found id: "7eaba707b03a138f0291c2f0905cf2c10e0c0e7a7d56a206cb7266035a7280bb"
	I1013 21:02:20.591595   11871 cri.go:89] found id: ""
	I1013 21:02:20.591649   11871 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 21:02:20.613609   11871 out.go:203] 
	W1013 21:02:20.616770   11871 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T21:02:20Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T21:02:20Z" level=error msg="open /run/runc: no such file or directory"
	
	W1013 21:02:20.616852   11871 out.go:285] * 
	* 
	W1013 21:02:20.621631   11871 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1013 21:02:20.625312   11871 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-arm64 addons enable headlamp -p addons-421494 --alsologtostderr -v=1": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-421494
helpers_test.go:243: (dbg) docker inspect addons-421494:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1c1825622e98f9a9fb3c72e6860c723048ac7a6e801dcc40c454272c1bcfd512",
	        "Created": "2025-10-13T20:59:17.041522545Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 5459,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-13T20:59:17.112130573Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/1c1825622e98f9a9fb3c72e6860c723048ac7a6e801dcc40c454272c1bcfd512/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1c1825622e98f9a9fb3c72e6860c723048ac7a6e801dcc40c454272c1bcfd512/hostname",
	        "HostsPath": "/var/lib/docker/containers/1c1825622e98f9a9fb3c72e6860c723048ac7a6e801dcc40c454272c1bcfd512/hosts",
	        "LogPath": "/var/lib/docker/containers/1c1825622e98f9a9fb3c72e6860c723048ac7a6e801dcc40c454272c1bcfd512/1c1825622e98f9a9fb3c72e6860c723048ac7a6e801dcc40c454272c1bcfd512-json.log",
	        "Name": "/addons-421494",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-421494:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-421494",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1c1825622e98f9a9fb3c72e6860c723048ac7a6e801dcc40c454272c1bcfd512",
	                "LowerDir": "/var/lib/docker/overlay2/25abef24ec30fd29758dd2d6150d3c107a3ce08958a2d71d9122456d332c01d3-init/diff:/var/lib/docker/overlay2/160e637be34680c755ffc6109c686a7fe5c4dfcba06c9274b3806724dc064518/diff",
	                "MergedDir": "/var/lib/docker/overlay2/25abef24ec30fd29758dd2d6150d3c107a3ce08958a2d71d9122456d332c01d3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/25abef24ec30fd29758dd2d6150d3c107a3ce08958a2d71d9122456d332c01d3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/25abef24ec30fd29758dd2d6150d3c107a3ce08958a2d71d9122456d332c01d3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-421494",
	                "Source": "/var/lib/docker/volumes/addons-421494/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-421494",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-421494",
	                "name.minikube.sigs.k8s.io": "addons-421494",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "15ae7685133aacb8b7f906637e00bcddee85eb0d94e5046fe4cc0f0bdbe1664f",
	            "SandboxKey": "/var/run/docker/netns/15ae7685133a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-421494": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:59:19:4d:f8:05",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "41efece0a838293653cc76ebcfe24b4727fd0e7cae57be2a13c239908efd9641",
	                    "EndpointID": "9db857c7e9e2557894fe6175472e8a9f13efed151d2398df7331def191faecaa",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-421494",
	                        "1c1825622e98"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
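The post-mortem captures the full docker inspect document above, but the helpers in these traces only consume a handful of fields from it. A small sketch of pulling just those fields, reusing the same Go templates that appear in the stderr logs earlier in this report:

	# container state, as inspected before each addon operation
	docker container inspect addons-421494 --format '{{.State.Status}}'
	# host port mapped to the node's SSH port (22/tcp), used to build the ssh client
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-421494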
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-421494 -n addons-421494
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-421494 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-421494 logs -n 25: (1.658126943s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-422444 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-422444   │ jenkins │ v1.37.0 │ 13 Oct 25 20:57 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 13 Oct 25 20:58 UTC │ 13 Oct 25 20:58 UTC │
	│ delete  │ -p download-only-422444                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-422444   │ jenkins │ v1.37.0 │ 13 Oct 25 20:58 UTC │ 13 Oct 25 20:58 UTC │
	│ start   │ -o=json --download-only -p download-only-923308 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-923308   │ jenkins │ v1.37.0 │ 13 Oct 25 20:58 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 13 Oct 25 20:58 UTC │ 13 Oct 25 20:58 UTC │
	│ delete  │ -p download-only-923308                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-923308   │ jenkins │ v1.37.0 │ 13 Oct 25 20:58 UTC │ 13 Oct 25 20:58 UTC │
	│ delete  │ -p download-only-422444                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-422444   │ jenkins │ v1.37.0 │ 13 Oct 25 20:58 UTC │ 13 Oct 25 20:58 UTC │
	│ delete  │ -p download-only-923308                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-923308   │ jenkins │ v1.37.0 │ 13 Oct 25 20:58 UTC │ 13 Oct 25 20:58 UTC │
	│ start   │ --download-only -p download-docker-875751 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-875751 │ jenkins │ v1.37.0 │ 13 Oct 25 20:58 UTC │                     │
	│ delete  │ -p download-docker-875751                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-875751 │ jenkins │ v1.37.0 │ 13 Oct 25 20:58 UTC │ 13 Oct 25 20:58 UTC │
	│ start   │ --download-only -p binary-mirror-313294 --alsologtostderr --binary-mirror http://127.0.0.1:46681 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-313294   │ jenkins │ v1.37.0 │ 13 Oct 25 20:58 UTC │                     │
	│ delete  │ -p binary-mirror-313294                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-313294   │ jenkins │ v1.37.0 │ 13 Oct 25 20:58 UTC │ 13 Oct 25 20:58 UTC │
	│ addons  │ disable dashboard -p addons-421494                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-421494          │ jenkins │ v1.37.0 │ 13 Oct 25 20:58 UTC │                     │
	│ addons  │ enable dashboard -p addons-421494                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-421494          │ jenkins │ v1.37.0 │ 13 Oct 25 20:58 UTC │                     │
	│ start   │ -p addons-421494 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-421494          │ jenkins │ v1.37.0 │ 13 Oct 25 20:58 UTC │ 13 Oct 25 21:01 UTC │
	│ addons  │ addons-421494 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-421494          │ jenkins │ v1.37.0 │ 13 Oct 25 21:01 UTC │                     │
	│ addons  │ addons-421494 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-421494          │ jenkins │ v1.37.0 │ 13 Oct 25 21:02 UTC │                     │
	│ addons  │ addons-421494 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-421494          │ jenkins │ v1.37.0 │ 13 Oct 25 21:02 UTC │                     │
	│ addons  │ addons-421494 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-421494          │ jenkins │ v1.37.0 │ 13 Oct 25 21:02 UTC │                     │
	│ ip      │ addons-421494 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-421494          │ jenkins │ v1.37.0 │ 13 Oct 25 21:02 UTC │ 13 Oct 25 21:02 UTC │
	│ addons  │ addons-421494 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-421494          │ jenkins │ v1.37.0 │ 13 Oct 25 21:02 UTC │                     │
	│ ssh     │ addons-421494 ssh cat /opt/local-path-provisioner/pvc-c6a933ea-f6fd-4814-a9cc-d489c3561930_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-421494          │ jenkins │ v1.37.0 │ 13 Oct 25 21:02 UTC │ 13 Oct 25 21:02 UTC │
	│ addons  │ addons-421494 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-421494          │ jenkins │ v1.37.0 │ 13 Oct 25 21:02 UTC │                     │
	│ addons  │ enable headlamp -p addons-421494 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-421494          │ jenkins │ v1.37.0 │ 13 Oct 25 21:02 UTC │                     │
	│ addons  │ addons-421494 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-421494          │ jenkins │ v1.37.0 │ 13 Oct 25 21:02 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 20:58:51
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 20:58:51.347613    5057 out.go:360] Setting OutFile to fd 1 ...
	I1013 20:58:51.347825    5057 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 20:58:51.347852    5057 out.go:374] Setting ErrFile to fd 2...
	I1013 20:58:51.347873    5057 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 20:58:51.348172    5057 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-2495/.minikube/bin
	I1013 20:58:51.348666    5057 out.go:368] Setting JSON to false
	I1013 20:58:51.349449    5057 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":2466,"bootTime":1760386666,"procs":143,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1013 20:58:51.349541    5057 start.go:141] virtualization:  
	I1013 20:58:51.352988    5057 out.go:179] * [addons-421494] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1013 20:58:51.355985    5057 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 20:58:51.356032    5057 notify.go:220] Checking for updates...
	I1013 20:58:51.361661    5057 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 20:58:51.364593    5057 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-2495/kubeconfig
	I1013 20:58:51.367332    5057 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-2495/.minikube
	I1013 20:58:51.370444    5057 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1013 20:58:51.373239    5057 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 20:58:51.376213    5057 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 20:58:51.396575    5057 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1013 20:58:51.396692    5057 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 20:58:51.458133    5057 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-10-13 20:58:51.448947061 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 20:58:51.458238    5057 docker.go:318] overlay module found
	I1013 20:58:51.461220    5057 out.go:179] * Using the docker driver based on user configuration
	I1013 20:58:51.464063    5057 start.go:305] selected driver: docker
	I1013 20:58:51.464092    5057 start.go:925] validating driver "docker" against <nil>
	I1013 20:58:51.464105    5057 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 20:58:51.464834    5057 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 20:58:51.517901    5057 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-10-13 20:58:51.508772532 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 20:58:51.518071    5057 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1013 20:58:51.518297    5057 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 20:58:51.521233    5057 out.go:179] * Using Docker driver with root privileges
	I1013 20:58:51.524077    5057 cni.go:84] Creating CNI manager for ""
	I1013 20:58:51.524147    5057 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 20:58:51.524157    5057 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1013 20:58:51.524237    5057 start.go:349] cluster config:
	{Name:addons-421494 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-421494 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1013 20:58:51.527362    5057 out.go:179] * Starting "addons-421494" primary control-plane node in "addons-421494" cluster
	I1013 20:58:51.530197    5057 cache.go:123] Beginning downloading kic base image for docker with crio
	I1013 20:58:51.533121    5057 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1013 20:58:51.536082    5057 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1013 20:58:51.536159    5057 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 20:58:51.536195    5057 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-2495/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1013 20:58:51.536207    5057 cache.go:58] Caching tarball of preloaded images
	I1013 20:58:51.536285    5057 preload.go:233] Found /home/jenkins/minikube-integration/21724-2495/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1013 20:58:51.536299    5057 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1013 20:58:51.536658    5057 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/config.json ...
	I1013 20:58:51.536684    5057 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/config.json: {Name:mk2741074136a1d96fd52bb31764367dd6839187 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 20:58:51.553011    5057 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 to local cache
	I1013 20:58:51.553149    5057 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local cache directory
	I1013 20:58:51.553172    5057 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local cache directory, skipping pull
	I1013 20:58:51.553177    5057 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in cache, skipping pull
	I1013 20:58:51.553185    5057 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 as a tarball
	I1013 20:58:51.553194    5057 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 from local cache
	I1013 20:59:09.227967    5057 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 from cached tarball
	I1013 20:59:09.228009    5057 cache.go:232] Successfully downloaded all kic artifacts
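
	At this point the cached kicbase tarball has been loaded into the local Docker daemon. A quick, purely illustrative way to confirm the base image is now present (not part of the test run itself) is to list it by repository:

	    $ docker images gcr.io/k8s-minikube/kicbase-builds
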
	I1013 20:59:09.228037    5057 start.go:360] acquireMachinesLock for addons-421494: {Name:mke133de16fa3a5dbff16f3894584bfb771c3296 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 20:59:09.228171    5057 start.go:364] duration metric: took 114.049µs to acquireMachinesLock for "addons-421494"
	I1013 20:59:09.228202    5057 start.go:93] Provisioning new machine with config: &{Name:addons-421494 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-421494 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 20:59:09.228275    5057 start.go:125] createHost starting for "" (driver="docker")
	I1013 20:59:09.231545    5057 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1013 20:59:09.231769    5057 start.go:159] libmachine.API.Create for "addons-421494" (driver="docker")
	I1013 20:59:09.231832    5057 client.go:168] LocalClient.Create starting
	I1013 20:59:09.231955    5057 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem
	I1013 20:59:09.597058    5057 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem
	I1013 20:59:09.963914    5057 cli_runner.go:164] Run: docker network inspect addons-421494 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1013 20:59:09.979759    5057 cli_runner.go:211] docker network inspect addons-421494 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1013 20:59:09.979861    5057 network_create.go:284] running [docker network inspect addons-421494] to gather additional debugging logs...
	I1013 20:59:09.979882    5057 cli_runner.go:164] Run: docker network inspect addons-421494
	W1013 20:59:09.994370    5057 cli_runner.go:211] docker network inspect addons-421494 returned with exit code 1
	I1013 20:59:09.994401    5057 network_create.go:287] error running [docker network inspect addons-421494]: docker network inspect addons-421494: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-421494 not found
	I1013 20:59:09.994414    5057 network_create.go:289] output of [docker network inspect addons-421494]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-421494 not found
	
	** /stderr **
	I1013 20:59:09.994507    5057 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 20:59:10.021050    5057 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40018f4770}
	I1013 20:59:10.021092    5057 network_create.go:124] attempt to create docker network addons-421494 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1013 20:59:10.021167    5057 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-421494 addons-421494
	I1013 20:59:10.086023    5057 network_create.go:108] docker network addons-421494 192.168.49.0/24 created
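
	With the bridge network in place, its subnet and gateway can be read back directly from Docker. The values below are the ones reported in the log above; the --format string is just one convenient way to print them, and the output shape is approximate:

	    $ docker network inspect addons-421494 --format '{{json .IPAM.Config}}'
	    [{"Subnet":"192.168.49.0/24","Gateway":"192.168.49.1"}]
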
	I1013 20:59:10.086066    5057 kic.go:121] calculated static IP "192.168.49.2" for the "addons-421494" container
	I1013 20:59:10.086212    5057 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1013 20:59:10.104025    5057 cli_runner.go:164] Run: docker volume create addons-421494 --label name.minikube.sigs.k8s.io=addons-421494 --label created_by.minikube.sigs.k8s.io=true
	I1013 20:59:10.124043    5057 oci.go:103] Successfully created a docker volume addons-421494
	I1013 20:59:10.124147    5057 cli_runner.go:164] Run: docker run --rm --name addons-421494-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-421494 --entrypoint /usr/bin/test -v addons-421494:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1013 20:59:12.578924    5057 cli_runner.go:217] Completed: docker run --rm --name addons-421494-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-421494 --entrypoint /usr/bin/test -v addons-421494:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib: (2.454736981s)
	I1013 20:59:12.578955    5057 oci.go:107] Successfully prepared a docker volume addons-421494
	I1013 20:59:12.578976    5057 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 20:59:12.578994    5057 kic.go:194] Starting extracting preloaded images to volume ...
	I1013 20:59:12.579061    5057 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-2495/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-421494:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1013 20:59:16.970989    5057 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-2495/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-421494:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.391883746s)
	I1013 20:59:16.971023    5057 kic.go:203] duration metric: took 4.392025356s to extract preloaded images to volume ...
	W1013 20:59:16.971164    5057 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1013 20:59:16.971277    5057 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1013 20:59:17.026126    5057 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-421494 --name addons-421494 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-421494 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-421494 --network addons-421494 --ip 192.168.49.2 --volume addons-421494:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1013 20:59:17.361153    5057 cli_runner.go:164] Run: docker container inspect addons-421494 --format={{.State.Running}}
	I1013 20:59:17.382881    5057 cli_runner.go:164] Run: docker container inspect addons-421494 --format={{.State.Status}}
	I1013 20:59:17.407022    5057 cli_runner.go:164] Run: docker exec addons-421494 stat /var/lib/dpkg/alternatives/iptables
	I1013 20:59:17.456980    5057 oci.go:144] the created container "addons-421494" has a running status.
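
	The docker run above publishes the node's SSH and API ports on loopback with ephemeral host ports. Which host port backs which container port can be checked with docker port; the 22/tcp mapping is the one the SSH provisioning below connects to:

	    $ docker port addons-421494 22/tcp
	    127.0.0.1:32768
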
	I1013 20:59:17.457009    5057 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21724-2495/.minikube/machines/addons-421494/id_rsa...
	I1013 20:59:18.394964    5057 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21724-2495/.minikube/machines/addons-421494/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1013 20:59:18.414536    5057 cli_runner.go:164] Run: docker container inspect addons-421494 --format={{.State.Status}}
	I1013 20:59:18.432746    5057 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1013 20:59:18.432769    5057 kic_runner.go:114] Args: [docker exec --privileged addons-421494 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1013 20:59:18.470258    5057 cli_runner.go:164] Run: docker container inspect addons-421494 --format={{.State.Status}}
	I1013 20:59:18.488178    5057 machine.go:93] provisionDockerMachine start ...
	I1013 20:59:18.488276    5057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-421494
	I1013 20:59:18.504255    5057 main.go:141] libmachine: Using SSH client type: native
	I1013 20:59:18.504586    5057 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1013 20:59:18.504603    5057 main.go:141] libmachine: About to run SSH command:
	hostname
	I1013 20:59:18.505184    5057 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1013 20:59:21.647182    5057 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-421494
	
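
	Once the handshake succeeds, the node is reachable over plain SSH using the key generated above and the published port; roughly:

	    $ ssh -i /home/jenkins/minikube-integration/21724-2495/.minikube/machines/addons-421494/id_rsa \
	          -p 32768 docker@127.0.0.1 hostname
	    addons-421494
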
	I1013 20:59:21.647278    5057 ubuntu.go:182] provisioning hostname "addons-421494"
	I1013 20:59:21.647360    5057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-421494
	I1013 20:59:21.664740    5057 main.go:141] libmachine: Using SSH client type: native
	I1013 20:59:21.665047    5057 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1013 20:59:21.665062    5057 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-421494 && echo "addons-421494" | sudo tee /etc/hostname
	I1013 20:59:21.816376    5057 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-421494
	
	I1013 20:59:21.816452    5057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-421494
	I1013 20:59:21.833875    5057 main.go:141] libmachine: Using SSH client type: native
	I1013 20:59:21.834182    5057 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1013 20:59:21.834211    5057 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-421494' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-421494/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-421494' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 20:59:21.975715    5057 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1013 20:59:21.975740    5057 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-2495/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-2495/.minikube}
	I1013 20:59:21.975800    5057 ubuntu.go:190] setting up certificates
	I1013 20:59:21.975812    5057 provision.go:84] configureAuth start
	I1013 20:59:21.975871    5057 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-421494
	I1013 20:59:21.992483    5057 provision.go:143] copyHostCerts
	I1013 20:59:21.992562    5057 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-2495/.minikube/cert.pem (1123 bytes)
	I1013 20:59:21.992695    5057 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-2495/.minikube/key.pem (1675 bytes)
	I1013 20:59:21.992760    5057 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-2495/.minikube/ca.pem (1082 bytes)
	I1013 20:59:21.992844    5057 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-2495/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca-key.pem org=jenkins.addons-421494 san=[127.0.0.1 192.168.49.2 addons-421494 localhost minikube]
	I1013 20:59:22.877383    5057 provision.go:177] copyRemoteCerts
	I1013 20:59:22.877450    5057 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 20:59:22.877515    5057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-421494
	I1013 20:59:22.893825    5057 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/addons-421494/id_rsa Username:docker}
	I1013 20:59:22.994770    5057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1013 20:59:23.012553    5057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1013 20:59:23.029591    5057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1013 20:59:23.045667    5057 provision.go:87] duration metric: took 1.069831765s to configureAuth
	I1013 20:59:23.045695    5057 ubuntu.go:206] setting minikube options for container-runtime
	I1013 20:59:23.045872    5057 config.go:182] Loaded profile config "addons-421494": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 20:59:23.045981    5057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-421494
	I1013 20:59:23.062516    5057 main.go:141] libmachine: Using SSH client type: native
	I1013 20:59:23.062828    5057 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1013 20:59:23.062848    5057 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1013 20:59:23.306556    5057 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
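
	The SSH command above drops a sysconfig fragment for CRI-O and restarts the service. An illustrative check from the host (the expected file content is exactly what the log shows):

	    $ docker exec addons-421494 cat /etc/sysconfig/crio.minikube
	    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	    $ docker exec addons-421494 systemctl is-active crio
	    active
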
	I1013 20:59:23.306575    5057 machine.go:96] duration metric: took 4.818377495s to provisionDockerMachine
	I1013 20:59:23.306585    5057 client.go:171] duration metric: took 14.074740166s to LocalClient.Create
	I1013 20:59:23.306597    5057 start.go:167] duration metric: took 14.074828746s to libmachine.API.Create "addons-421494"
	I1013 20:59:23.306604    5057 start.go:293] postStartSetup for "addons-421494" (driver="docker")
	I1013 20:59:23.306614    5057 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 20:59:23.306671    5057 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 20:59:23.306711    5057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-421494
	I1013 20:59:23.323696    5057 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/addons-421494/id_rsa Username:docker}
	I1013 20:59:23.423342    5057 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 20:59:23.426401    5057 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1013 20:59:23.426427    5057 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1013 20:59:23.426438    5057 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-2495/.minikube/addons for local assets ...
	I1013 20:59:23.426498    5057 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-2495/.minikube/files for local assets ...
	I1013 20:59:23.426519    5057 start.go:296] duration metric: took 119.908986ms for postStartSetup
	I1013 20:59:23.426819    5057 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-421494
	I1013 20:59:23.445621    5057 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/config.json ...
	I1013 20:59:23.445891    5057 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 20:59:23.445929    5057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-421494
	I1013 20:59:23.462794    5057 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/addons-421494/id_rsa Username:docker}
	I1013 20:59:23.560811    5057 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1013 20:59:23.565504    5057 start.go:128] duration metric: took 14.337214231s to createHost
	I1013 20:59:23.565525    5057 start.go:83] releasing machines lock for "addons-421494", held for 14.337339438s
	I1013 20:59:23.565595    5057 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-421494
	I1013 20:59:23.582524    5057 ssh_runner.go:195] Run: cat /version.json
	I1013 20:59:23.582574    5057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-421494
	I1013 20:59:23.582840    5057 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 20:59:23.582892    5057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-421494
	I1013 20:59:23.599859    5057 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/addons-421494/id_rsa Username:docker}
	I1013 20:59:23.607980    5057 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/addons-421494/id_rsa Username:docker}
	I1013 20:59:23.787558    5057 ssh_runner.go:195] Run: systemctl --version
	I1013 20:59:23.793579    5057 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1013 20:59:23.828661    5057 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 20:59:23.832708    5057 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 20:59:23.832785    5057 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 20:59:23.860045    5057 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1013 20:59:23.860111    5057 start.go:495] detecting cgroup driver to use...
	I1013 20:59:23.860149    5057 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1013 20:59:23.860199    5057 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1013 20:59:23.876954    5057 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1013 20:59:23.889208    5057 docker.go:218] disabling cri-docker service (if available) ...
	I1013 20:59:23.889270    5057 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 20:59:23.906273    5057 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 20:59:23.923570    5057 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 20:59:24.031041    5057 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 20:59:24.156867    5057 docker.go:234] disabling docker service ...
	I1013 20:59:24.156929    5057 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 20:59:24.175868    5057 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 20:59:24.188355    5057 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 20:59:24.310522    5057 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 20:59:24.424135    5057 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1013 20:59:24.436184    5057 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 20:59:24.448846    5057 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1013 20:59:24.448915    5057 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 20:59:24.456690    5057 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1013 20:59:24.456801    5057 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 20:59:24.464707    5057 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 20:59:24.472238    5057 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 20:59:24.479967    5057 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 20:59:24.487314    5057 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 20:59:24.494925    5057 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 20:59:24.506899    5057 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
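
	Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings (approximate; surrounding lines depend on the default config shipped in the base image):

	    $ sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	    pause_image = "registry.k8s.io/pause:3.10.1"
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
	      "net.ipv4.ip_unprivileged_port_start=0",
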
	I1013 20:59:24.515690    5057 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 20:59:24.522848    5057 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1013 20:59:24.522924    5057 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1013 20:59:24.536343    5057 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1013 20:59:24.543820    5057 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 20:59:24.649781    5057 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1013 20:59:24.772857    5057 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1013 20:59:24.772936    5057 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1013 20:59:24.776417    5057 start.go:563] Will wait 60s for crictl version
	I1013 20:59:24.776475    5057 ssh_runner.go:195] Run: which crictl
	I1013 20:59:24.779592    5057 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1013 20:59:24.804884    5057 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1013 20:59:24.805012    5057 ssh_runner.go:195] Run: crio --version
	I1013 20:59:24.835696    5057 ssh_runner.go:195] Run: crio --version
	I1013 20:59:24.868481    5057 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1013 20:59:24.871218    5057 cli_runner.go:164] Run: docker network inspect addons-421494 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 20:59:24.894013    5057 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1013 20:59:24.897517    5057 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 20:59:24.906956    5057 kubeadm.go:883] updating cluster {Name:addons-421494 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-421494 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 20:59:24.907077    5057 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 20:59:24.907135    5057 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 20:59:24.939322    5057 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 20:59:24.939343    5057 crio.go:433] Images already preloaded, skipping extraction
	I1013 20:59:24.939397    5057 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 20:59:24.963970    5057 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 20:59:24.963992    5057 cache_images.go:85] Images are preloaded, skipping loading
	I1013 20:59:24.964000    5057 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1013 20:59:24.964091    5057 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-421494 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-421494 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1013 20:59:24.964175    5057 ssh_runner.go:195] Run: crio config
	I1013 20:59:25.017391    5057 cni.go:84] Creating CNI manager for ""
	I1013 20:59:25.017416    5057 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 20:59:25.017463    5057 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1013 20:59:25.017497    5057 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-421494 NodeName:addons-421494 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 20:59:25.017719    5057 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-421494"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
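
	This generated config is written to /var/tmp/minikube/kubeadm.yaml.new (see the scp a few lines below) and later consumed by kubeadm; the actual invocation minikube uses appears further down in the full log. Assuming kubeadm is among the binaries found under /var/lib/minikube/binaries/v1.34.1, a dry run against the same file would look roughly like:

	    $ sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
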
	I1013 20:59:25.017815    5057 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1013 20:59:25.025866    5057 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 20:59:25.025976    5057 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 20:59:25.033605    5057 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1013 20:59:25.046739    5057 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 20:59:25.058865    5057 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
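
	At this point the kubelet drop-in, the kubelet unit and the kubeadm config have all been rendered onto the node. The drop-in at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf should contain the unit content shown earlier (Wants=crio.service plus the ExecStart override); an easy check from inside the node:

	    $ systemctl cat kubelet
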
	I1013 20:59:25.070903    5057 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1013 20:59:25.074367    5057 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 20:59:25.083833    5057 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 20:59:25.209873    5057 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 20:59:25.224550    5057 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494 for IP: 192.168.49.2
	I1013 20:59:25.224612    5057 certs.go:195] generating shared ca certs ...
	I1013 20:59:25.224642    5057 certs.go:227] acquiring lock for ca certs: {Name:mk2386d3847709c1fe7ff4ab092e2e3fd8551167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 20:59:25.224786    5057 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-2495/.minikube/ca.key
	I1013 20:59:25.591047    5057 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-2495/.minikube/ca.crt ...
	I1013 20:59:25.591077    5057 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/ca.crt: {Name:mk8d9df9f97f37a0e7946e483b0cf0cab6dca92f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 20:59:25.591263    5057 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-2495/.minikube/ca.key ...
	I1013 20:59:25.591275    5057 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/ca.key: {Name:mk61b8d997f2b410c27c4783c8cf57f766b1ba78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 20:59:25.591367    5057 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.key
	I1013 20:59:25.927229    5057 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.crt ...
	I1013 20:59:25.927252    5057 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.crt: {Name:mk69437ac22a94b21039b7f8a2ae52550cf27a29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 20:59:25.927399    5057 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.key ...
	I1013 20:59:25.927406    5057 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.key: {Name:mk44f1041f8c214a497a0fd3fdfa68d761f9a861 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 20:59:25.927469    5057 certs.go:257] generating profile certs ...
	I1013 20:59:25.927521    5057 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/client.key
	I1013 20:59:25.927533    5057 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/client.crt with IP's: []
	I1013 20:59:26.150839    5057 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/client.crt ...
	I1013 20:59:26.150870    5057 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/client.crt: {Name:mk81ace3874509099a8b83d36f63ec14297cef29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 20:59:26.151061    5057 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/client.key ...
	I1013 20:59:26.151074    5057 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/client.key: {Name:mk6dafd4878763af63ae414731b4047cb774d060 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 20:59:26.151153    5057 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/apiserver.key.c1266fec
	I1013 20:59:26.151173    5057 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/apiserver.crt.c1266fec with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1013 20:59:26.511930    5057 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/apiserver.crt.c1266fec ...
	I1013 20:59:26.511963    5057 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/apiserver.crt.c1266fec: {Name:mkc8c14d78802ad91717649469d7012488c7e448 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 20:59:26.512146    5057 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/apiserver.key.c1266fec ...
	I1013 20:59:26.512159    5057 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/apiserver.key.c1266fec: {Name:mk4321121bc9ea7dec0507db2c366785884a723d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 20:59:26.512243    5057 certs.go:382] copying /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/apiserver.crt.c1266fec -> /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/apiserver.crt
	I1013 20:59:26.512326    5057 certs.go:386] copying /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/apiserver.key.c1266fec -> /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/apiserver.key
	I1013 20:59:26.512381    5057 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/proxy-client.key
	I1013 20:59:26.512400    5057 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/proxy-client.crt with IP's: []
	I1013 20:59:26.816031    5057 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/proxy-client.crt ...
	I1013 20:59:26.816058    5057 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/proxy-client.crt: {Name:mkc55fe7b19842ba0f74e6abe8297181c9f920a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 20:59:26.816229    5057 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/proxy-client.key ...
	I1013 20:59:26.816241    5057 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/proxy-client.key: {Name:mk8fc4bb5d8f5d7f52726b9c3b46816d61ded9bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 20:59:26.816430    5057 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca-key.pem (1679 bytes)
	I1013 20:59:26.816481    5057 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem (1082 bytes)
	I1013 20:59:26.816514    5057 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem (1123 bytes)
	I1013 20:59:26.816541    5057 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/key.pem (1675 bytes)
	I1013 20:59:26.817100    5057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 20:59:26.834971    5057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1013 20:59:26.852405    5057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 20:59:26.870674    5057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1013 20:59:26.887008    5057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1013 20:59:26.903524    5057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1013 20:59:26.920034    5057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 20:59:26.936182    5057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1013 20:59:26.952111    5057 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 20:59:26.968451    5057 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 20:59:26.980753    5057 ssh_runner.go:195] Run: openssl version
	I1013 20:59:26.986630    5057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 20:59:26.994893    5057 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 20:59:26.998214    5057 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 20:59 /usr/share/ca-certificates/minikubeCA.pem
	I1013 20:59:26.998275    5057 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 20:59:27.039128    5057 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
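The two commands above are how minikube installs its CA into the node's trust store: the PEM is linked under /usr/share/ca-certificates and a second symlink named after the certificate's subject hash (b5213941 in this run) is dropped into /etc/ssl/certs, the same layout c_rehash produces. A minimal by-hand sketch of those steps, using the paths from this log:

	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941 here
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/${hash}.0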
	I1013 20:59:27.047095    5057 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 20:59:27.050241    5057 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1013 20:59:27.050287    5057 kubeadm.go:400] StartCluster: {Name:addons-421494 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-421494 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 20:59:27.050365    5057 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 20:59:27.050434    5057 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 20:59:27.075355    5057 cri.go:89] found id: ""
	I1013 20:59:27.075481    5057 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1013 20:59:27.082928    5057 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1013 20:59:27.090106    5057 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1013 20:59:27.090188    5057 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1013 20:59:27.097693    5057 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1013 20:59:27.097712    5057 kubeadm.go:157] found existing configuration files:
	
	I1013 20:59:27.097761    5057 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1013 20:59:27.104892    5057 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1013 20:59:27.104977    5057 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1013 20:59:27.111718    5057 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1013 20:59:27.118842    5057 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1013 20:59:27.118904    5057 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1013 20:59:27.125611    5057 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1013 20:59:27.132489    5057 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1013 20:59:27.132546    5057 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1013 20:59:27.139068    5057 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1013 20:59:27.145877    5057 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1013 20:59:27.145934    5057 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1013 20:59:27.152457    5057 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1013 20:59:27.192696    5057 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1013 20:59:27.192769    5057 kubeadm.go:318] [preflight] Running pre-flight checks
	I1013 20:59:27.217832    5057 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1013 20:59:27.217940    5057 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1013 20:59:27.217981    5057 kubeadm.go:318] OS: Linux
	I1013 20:59:27.218043    5057 kubeadm.go:318] CGROUPS_CPU: enabled
	I1013 20:59:27.218108    5057 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1013 20:59:27.218172    5057 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1013 20:59:27.218243    5057 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1013 20:59:27.218311    5057 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1013 20:59:27.218373    5057 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1013 20:59:27.218433    5057 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1013 20:59:27.218498    5057 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1013 20:59:27.218553    5057 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1013 20:59:27.283268    5057 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1013 20:59:27.283400    5057 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1013 20:59:27.283506    5057 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1013 20:59:27.290373    5057 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1013 20:59:27.296663    5057 out.go:252]   - Generating certificates and keys ...
	I1013 20:59:27.296779    5057 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1013 20:59:27.296862    5057 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1013 20:59:27.464961    5057 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1013 20:59:29.295969    5057 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1013 20:59:29.824044    5057 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1013 20:59:30.362277    5057 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1013 20:59:31.596921    5057 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1013 20:59:31.597213    5057 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-421494 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1013 20:59:32.927724    5057 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1013 20:59:32.928065    5057 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-421494 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1013 20:59:33.248589    5057 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1013 20:59:33.417158    5057 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1013 20:59:33.630091    5057 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1013 20:59:33.630367    5057 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1013 20:59:34.147513    5057 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1013 20:59:34.732438    5057 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1013 20:59:35.119123    5057 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1013 20:59:35.738292    5057 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1013 20:59:36.033106    5057 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1013 20:59:36.033744    5057 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1013 20:59:36.036529    5057 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1013 20:59:36.040176    5057 out.go:252]   - Booting up control plane ...
	I1013 20:59:36.040298    5057 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1013 20:59:36.040386    5057 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1013 20:59:36.040462    5057 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1013 20:59:36.056488    5057 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1013 20:59:36.056621    5057 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1013 20:59:36.064086    5057 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1013 20:59:36.070062    5057 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1013 20:59:36.070501    5057 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1013 20:59:36.200083    5057 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1013 20:59:36.200208    5057 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1013 20:59:37.201418    5057 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001652111s
	I1013 20:59:37.204814    5057 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1013 20:59:37.204911    5057 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1013 20:59:37.205240    5057 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1013 20:59:37.205338    5057 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1013 20:59:41.036576    5057 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.831345254s
	I1013 20:59:43.060978    5057 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 5.856081814s
	I1013 20:59:43.706461    5057 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.501519283s
	I1013 20:59:43.728815    5057 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1013 20:59:43.741178    5057 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1013 20:59:43.756276    5057 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1013 20:59:43.756499    5057 kubeadm.go:318] [mark-control-plane] Marking the node addons-421494 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1013 20:59:43.768562    5057 kubeadm.go:318] [bootstrap-token] Using token: b7wyk6.mqwpjdody0hqmiej
	I1013 20:59:43.771511    5057 out.go:252]   - Configuring RBAC rules ...
	I1013 20:59:43.771639    5057 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1013 20:59:43.775924    5057 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1013 20:59:43.786549    5057 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1013 20:59:43.795358    5057 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1013 20:59:43.799205    5057 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1013 20:59:43.803827    5057 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1013 20:59:44.114737    5057 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1013 20:59:44.550248    5057 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1013 20:59:45.132476    5057 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1013 20:59:45.132528    5057 kubeadm.go:318] 
	I1013 20:59:45.132639    5057 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1013 20:59:45.132669    5057 kubeadm.go:318] 
	I1013 20:59:45.132754    5057 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1013 20:59:45.132759    5057 kubeadm.go:318] 
	I1013 20:59:45.132791    5057 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1013 20:59:45.132857    5057 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1013 20:59:45.132911    5057 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1013 20:59:45.132916    5057 kubeadm.go:318] 
	I1013 20:59:45.132981    5057 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1013 20:59:45.132987    5057 kubeadm.go:318] 
	I1013 20:59:45.133045    5057 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1013 20:59:45.133052    5057 kubeadm.go:318] 
	I1013 20:59:45.133107    5057 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1013 20:59:45.133185    5057 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1013 20:59:45.133257    5057 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1013 20:59:45.133262    5057 kubeadm.go:318] 
	I1013 20:59:45.133350    5057 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1013 20:59:45.133431    5057 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1013 20:59:45.133437    5057 kubeadm.go:318] 
	I1013 20:59:45.133526    5057 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token b7wyk6.mqwpjdody0hqmiej \
	I1013 20:59:45.133634    5057 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:8246abc30e5ad4f73e0c5665e9f98b5f472397f3707b313971a0405cce9e4e60 \
	I1013 20:59:45.133656    5057 kubeadm.go:318] 	--control-plane 
	I1013 20:59:45.133662    5057 kubeadm.go:318] 
	I1013 20:59:45.133751    5057 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1013 20:59:45.133756    5057 kubeadm.go:318] 
	I1013 20:59:45.133842    5057 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token b7wyk6.mqwpjdody0hqmiej \
	I1013 20:59:45.133949    5057 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:8246abc30e5ad4f73e0c5665e9f98b5f472397f3707b313971a0405cce9e4e60 
	I1013 20:59:45.154864    5057 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1013 20:59:45.156095    5057 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1013 20:59:45.156251    5057 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
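The --discovery-token-ca-cert-hash value in the join commands above is the SHA-256 of the cluster CA's DER-encoded public key. It can be recomputed on the control plane to validate a join command; the sketch below uses the standard kubeadm recipe and assumes the default /etc/kubernetes/pki path (on this minikube node the CA actually lives under /var/lib/minikube/certs):

	openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* /sha256:/'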
	I1013 20:59:45.156280    5057 cni.go:84] Creating CNI manager for ""
	I1013 20:59:45.156289    5057 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 20:59:45.164740    5057 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1013 20:59:45.170362    5057 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1013 20:59:45.183872    5057 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1013 20:59:45.183894    5057 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1013 20:59:45.205217    5057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
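As the cni.go lines above note, the docker driver plus crio runtime makes minikube deploy kindnet as the CNI, and the apply call pushes that manifest. Once the apiserver is reachable, rollout can be checked with something like the following (DaemonSet name and label assumed from the stock kindnet manifest, not shown in this log):

	kubectl -n kube-system get daemonset kindnet
	kubectl -n kube-system get pods -l app=kindnet -o wide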
	I1013 20:59:45.611860    5057 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1013 20:59:45.611989    5057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 20:59:45.612052    5057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-421494 minikube.k8s.io/updated_at=2025_10_13T20_59_45_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6d66ff63385795e7745a92b3d96cb54f5b977801 minikube.k8s.io/name=addons-421494 minikube.k8s.io/primary=true
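The label command above stamps the node with the minikube version, commit, profile name and primary-node marker, and the clusterrolebinding grants cluster-admin to the kube-system default service account. Both are easy to verify after start-up, for example:

	kubectl get node addons-421494 -o jsonpath='{.metadata.labels}'
	kubectl get clusterrolebinding minikube-rbac -o wide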
	I1013 20:59:45.741785    5057 ops.go:34] apiserver oom_adj: -16
	I1013 20:59:45.741884    5057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 20:59:46.242149    5057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 20:59:46.742694    5057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 20:59:47.242153    5057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 20:59:47.742096    5057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 20:59:48.242904    5057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 20:59:48.742152    5057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 20:59:49.242078    5057 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 20:59:49.335227    5057 kubeadm.go:1113] duration metric: took 3.723284316s to wait for elevateKubeSystemPrivileges
	I1013 20:59:49.335257    5057 kubeadm.go:402] duration metric: took 22.284972143s to StartCluster
	I1013 20:59:49.335280    5057 settings.go:142] acquiring lock: {Name:mk4a4b065845724eb9b4bb1832a39a02e57dd066 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 20:59:49.335401    5057 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-2495/kubeconfig
	I1013 20:59:49.335745    5057 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/kubeconfig: {Name:mke211bdcf461494c5205a242d94f4d44bbce10e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 20:59:49.335954    5057 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 20:59:49.336084    5057 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1013 20:59:49.336298    5057 config.go:182] Loaded profile config "addons-421494": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 20:59:49.336337    5057 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
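The toEnable map above is the per-addon switchboard for this profile; the same information is available from the CLI, and individual addons can be toggled with the binary the test drives, e.g.:

	out/minikube-linux-arm64 -p addons-421494 addons list
	out/minikube-linux-arm64 -p addons-421494 addons enable metrics-server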
	I1013 20:59:49.336418    5057 addons.go:69] Setting yakd=true in profile "addons-421494"
	I1013 20:59:49.336436    5057 addons.go:238] Setting addon yakd=true in "addons-421494"
	I1013 20:59:49.336463    5057 host.go:66] Checking if "addons-421494" exists ...
	I1013 20:59:49.336918    5057 cli_runner.go:164] Run: docker container inspect addons-421494 --format={{.State.Status}}
	I1013 20:59:49.337256    5057 addons.go:69] Setting inspektor-gadget=true in profile "addons-421494"
	I1013 20:59:49.337277    5057 addons.go:238] Setting addon inspektor-gadget=true in "addons-421494"
	I1013 20:59:49.337324    5057 host.go:66] Checking if "addons-421494" exists ...
	I1013 20:59:49.337749    5057 cli_runner.go:164] Run: docker container inspect addons-421494 --format={{.State.Status}}
	I1013 20:59:49.339582    5057 addons.go:69] Setting metrics-server=true in profile "addons-421494"
	I1013 20:59:49.339614    5057 addons.go:238] Setting addon metrics-server=true in "addons-421494"
	I1013 20:59:49.339638    5057 host.go:66] Checking if "addons-421494" exists ...
	I1013 20:59:49.340084    5057 cli_runner.go:164] Run: docker container inspect addons-421494 --format={{.State.Status}}
	I1013 20:59:49.341538    5057 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-421494"
	I1013 20:59:49.341573    5057 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-421494"
	I1013 20:59:49.341611    5057 host.go:66] Checking if "addons-421494" exists ...
	I1013 20:59:49.342038    5057 cli_runner.go:164] Run: docker container inspect addons-421494 --format={{.State.Status}}
	I1013 20:59:49.352650    5057 addons.go:69] Setting cloud-spanner=true in profile "addons-421494"
	I1013 20:59:49.352676    5057 addons.go:69] Setting registry=true in profile "addons-421494"
	I1013 20:59:49.352697    5057 addons.go:238] Setting addon registry=true in "addons-421494"
	I1013 20:59:49.352706    5057 addons.go:238] Setting addon cloud-spanner=true in "addons-421494"
	I1013 20:59:49.352733    5057 host.go:66] Checking if "addons-421494" exists ...
	I1013 20:59:49.352744    5057 host.go:66] Checking if "addons-421494" exists ...
	I1013 20:59:49.353196    5057 cli_runner.go:164] Run: docker container inspect addons-421494 --format={{.State.Status}}
	I1013 20:59:49.353233    5057 cli_runner.go:164] Run: docker container inspect addons-421494 --format={{.State.Status}}
	I1013 20:59:49.367811    5057 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-421494"
	I1013 20:59:49.367975    5057 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-421494"
	I1013 20:59:49.368033    5057 host.go:66] Checking if "addons-421494" exists ...
	I1013 20:59:49.368788    5057 cli_runner.go:164] Run: docker container inspect addons-421494 --format={{.State.Status}}
	I1013 20:59:49.369050    5057 addons.go:69] Setting registry-creds=true in profile "addons-421494"
	I1013 20:59:49.369115    5057 addons.go:238] Setting addon registry-creds=true in "addons-421494"
	I1013 20:59:49.369190    5057 host.go:66] Checking if "addons-421494" exists ...
	I1013 20:59:49.369856    5057 cli_runner.go:164] Run: docker container inspect addons-421494 --format={{.State.Status}}
	I1013 20:59:49.372247    5057 addons.go:69] Setting storage-provisioner=true in profile "addons-421494"
	I1013 20:59:49.372312    5057 addons.go:238] Setting addon storage-provisioner=true in "addons-421494"
	I1013 20:59:49.372352    5057 host.go:66] Checking if "addons-421494" exists ...
	I1013 20:59:49.372920    5057 cli_runner.go:164] Run: docker container inspect addons-421494 --format={{.State.Status}}
	I1013 20:59:49.378775    5057 addons.go:69] Setting default-storageclass=true in profile "addons-421494"
	I1013 20:59:49.378864    5057 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-421494"
	I1013 20:59:49.379508    5057 cli_runner.go:164] Run: docker container inspect addons-421494 --format={{.State.Status}}
	I1013 20:59:49.398716    5057 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-421494"
	I1013 20:59:49.398757    5057 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-421494"
	I1013 20:59:49.399713    5057 cli_runner.go:164] Run: docker container inspect addons-421494 --format={{.State.Status}}
	I1013 20:59:49.399750    5057 addons.go:69] Setting volcano=true in profile "addons-421494"
	I1013 20:59:49.399770    5057 addons.go:238] Setting addon volcano=true in "addons-421494"
	I1013 20:59:49.399821    5057 host.go:66] Checking if "addons-421494" exists ...
	I1013 20:59:49.400250    5057 cli_runner.go:164] Run: docker container inspect addons-421494 --format={{.State.Status}}
	I1013 20:59:49.404020    5057 addons.go:69] Setting gcp-auth=true in profile "addons-421494"
	I1013 20:59:49.404052    5057 mustload.go:65] Loading cluster: addons-421494
	I1013 20:59:49.404246    5057 config.go:182] Loaded profile config "addons-421494": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 20:59:49.404493    5057 cli_runner.go:164] Run: docker container inspect addons-421494 --format={{.State.Status}}
	I1013 20:59:49.426549    5057 addons.go:69] Setting ingress=true in profile "addons-421494"
	I1013 20:59:49.426582    5057 addons.go:238] Setting addon ingress=true in "addons-421494"
	I1013 20:59:49.426628    5057 host.go:66] Checking if "addons-421494" exists ...
	I1013 20:59:49.427113    5057 cli_runner.go:164] Run: docker container inspect addons-421494 --format={{.State.Status}}
	I1013 20:59:49.428349    5057 addons.go:69] Setting volumesnapshots=true in profile "addons-421494"
	I1013 20:59:49.428381    5057 addons.go:238] Setting addon volumesnapshots=true in "addons-421494"
	I1013 20:59:49.428411    5057 host.go:66] Checking if "addons-421494" exists ...
	I1013 20:59:49.428854    5057 cli_runner.go:164] Run: docker container inspect addons-421494 --format={{.State.Status}}
	I1013 20:59:49.447179    5057 addons.go:69] Setting ingress-dns=true in profile "addons-421494"
	I1013 20:59:49.447211    5057 addons.go:238] Setting addon ingress-dns=true in "addons-421494"
	I1013 20:59:49.447329    5057 host.go:66] Checking if "addons-421494" exists ...
	I1013 20:59:49.447914    5057 out.go:179] * Verifying Kubernetes components...
	I1013 20:59:49.352650    5057 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-421494"
	I1013 20:59:49.447990    5057 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-421494"
	I1013 20:59:49.448011    5057 host.go:66] Checking if "addons-421494" exists ...
	I1013 20:59:49.448373    5057 cli_runner.go:164] Run: docker container inspect addons-421494 --format={{.State.Status}}
	I1013 20:59:49.447923    5057 cli_runner.go:164] Run: docker container inspect addons-421494 --format={{.State.Status}}
	I1013 20:59:49.467316    5057 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1013 20:59:49.470186    5057 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1013 20:59:49.470209    5057 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1013 20:59:49.470270    5057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-421494
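The docker container inspect -f calls in this phase resolve which host port Docker mapped to the node container's SSH port (22/tcp); the sshutil lines further down then dial 127.0.0.1 on that port (32768 in this run). The same lookup by hand:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-421494
	docker port addons-421494 22   # equivalent, prints the full host address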
	I1013 20:59:49.486094    5057 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 20:59:49.486271    5057 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1013 20:59:49.493607    5057 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1013 20:59:49.493880    5057 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1013 20:59:49.496288    5057 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1013 20:59:49.498649    5057 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1013 20:59:49.498667    5057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1013 20:59:49.498720    5057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-421494
	I1013 20:59:49.504261    5057 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1013 20:59:49.504290    5057 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1013 20:59:49.504347    5057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-421494
	I1013 20:59:49.507511    5057 addons.go:238] Setting addon default-storageclass=true in "addons-421494"
	I1013 20:59:49.507625    5057 host.go:66] Checking if "addons-421494" exists ...
	I1013 20:59:49.509011    5057 cli_runner.go:164] Run: docker container inspect addons-421494 --format={{.State.Status}}
	I1013 20:59:49.511662    5057 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 20:59:49.511687    5057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1013 20:59:49.511762    5057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-421494
	I1013 20:59:49.496479    5057 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1013 20:59:49.539553    5057 out.go:179]   - Using image docker.io/registry:3.0.0
	I1013 20:59:49.539880    5057 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1013 20:59:49.545581    5057 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1013 20:59:49.545604    5057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1013 20:59:49.545669    5057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-421494
	I1013 20:59:49.545876    5057 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1013 20:59:49.545898    5057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1013 20:59:49.545970    5057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-421494
	I1013 20:59:49.496484    5057 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1013 20:59:49.496522    5057 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1013 20:59:49.559734    5057 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1013 20:59:49.559860    5057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-421494
	I1013 20:59:49.563866    5057 host.go:66] Checking if "addons-421494" exists ...
	I1013 20:59:49.599920    5057 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1013 20:59:49.603969    5057 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1013 20:59:49.604036    5057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1013 20:59:49.604123    5057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-421494
	I1013 20:59:49.614717    5057 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1013 20:59:49.620424    5057 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1013 20:59:49.623305    5057 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1013 20:59:49.628726    5057 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1013 20:59:49.633068    5057 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1013 20:59:49.639123    5057 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	W1013 20:59:49.642155    5057 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1013 20:59:49.654003    5057 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1013 20:59:49.656677    5057 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1013 20:59:49.656705    5057 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1013 20:59:49.656775    5057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-421494
	I1013 20:59:49.684153    5057 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1013 20:59:49.690189    5057 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I1013 20:59:49.701558    5057 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1013 20:59:49.704446    5057 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1013 20:59:49.704467    5057 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1013 20:59:49.704550    5057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-421494
	I1013 20:59:49.732581    5057 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1013 20:59:49.732734    5057 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1013 20:59:49.735760    5057 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1013 20:59:49.736039    5057 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1013 20:59:49.736070    5057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1013 20:59:49.736172    5057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-421494
	I1013 20:59:49.756466    5057 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1013 20:59:49.756539    5057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1013 20:59:49.756617    5057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-421494
	I1013 20:59:49.765618    5057 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/addons-421494/id_rsa Username:docker}
	I1013 20:59:49.766274    5057 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1013 20:59:49.766291    5057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1013 20:59:49.766349    5057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-421494
	I1013 20:59:49.779119    5057 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1013 20:59:49.779146    5057 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1013 20:59:49.779201    5057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-421494
	I1013 20:59:49.789353    5057 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/addons-421494/id_rsa Username:docker}
	I1013 20:59:49.798740    5057 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/addons-421494/id_rsa Username:docker}
	I1013 20:59:49.799479    5057 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/addons-421494/id_rsa Username:docker}
	I1013 20:59:49.799871    5057 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/addons-421494/id_rsa Username:docker}
	I1013 20:59:49.825995    5057 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/addons-421494/id_rsa Username:docker}
	I1013 20:59:49.827577    5057 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-421494"
	I1013 20:59:49.827615    5057 host.go:66] Checking if "addons-421494" exists ...
	I1013 20:59:49.828144    5057 cli_runner.go:164] Run: docker container inspect addons-421494 --format={{.State.Status}}
	I1013 20:59:49.841736    5057 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
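The pipeline above rewrites the coredns ConfigMap in place: the sed expressions insert a hosts block for host.minikube.internal ahead of the forward plugin and a log directive ahead of errors, then the result is piped back through kubectl replace. Reconstructed from those expressions (a sketch, not copied from the live ConfigMap), the relevant part of the Corefile ends up roughly as:

	.:53 {
	    log
	    errors
	    hosts {
	       192.168.49.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf
	    # remaining stock plugins unchanged
	}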
	I1013 20:59:49.894299    5057 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/addons-421494/id_rsa Username:docker}
	I1013 20:59:49.907307    5057 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/addons-421494/id_rsa Username:docker}
	I1013 20:59:49.927640    5057 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/addons-421494/id_rsa Username:docker}
	I1013 20:59:49.931896    5057 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/addons-421494/id_rsa Username:docker}
	I1013 20:59:49.936246    5057 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/addons-421494/id_rsa Username:docker}
	I1013 20:59:49.958617    5057 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/addons-421494/id_rsa Username:docker}
	I1013 20:59:49.964895    5057 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/addons-421494/id_rsa Username:docker}
	I1013 20:59:49.972401    5057 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/addons-421494/id_rsa Username:docker}
	W1013 20:59:49.980586    5057 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1013 20:59:49.980642    5057 retry.go:31] will retry after 359.598785ms: ssh: handshake failed: EOF
	I1013 20:59:49.999581    5057 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1013 20:59:50.002782    5057 out.go:179]   - Using image docker.io/busybox:stable
	I1013 20:59:50.003402    5057 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 20:59:50.006016    5057 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1013 20:59:50.006098    5057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1013 20:59:50.006189    5057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-421494
	I1013 20:59:50.039903    5057 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/addons-421494/id_rsa Username:docker}
	I1013 20:59:50.503477    5057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1013 20:59:50.615732    5057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 20:59:50.687352    5057 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1013 20:59:50.687372    5057 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1013 20:59:50.706176    5057 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1013 20:59:50.706254    5057 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1013 20:59:50.731826    5057 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1013 20:59:50.731896    5057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1013 20:59:50.742869    5057 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1013 20:59:50.742948    5057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1013 20:59:50.752677    5057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1013 20:59:50.789619    5057 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1013 20:59:50.789689    5057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1013 20:59:50.825217    5057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1013 20:59:50.835742    5057 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1013 20:59:50.835922    5057 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1013 20:59:50.851258    5057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 20:59:50.882536    5057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1013 20:59:50.885854    5057 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1013 20:59:50.885929    5057 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1013 20:59:50.891680    5057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1013 20:59:50.953987    5057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1013 20:59:50.965194    5057 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1013 20:59:50.965267    5057 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1013 20:59:50.972975    5057 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1013 20:59:50.973042    5057 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1013 20:59:50.997042    5057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1013 20:59:51.045664    5057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1013 20:59:51.051166    5057 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1013 20:59:51.051245    5057 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1013 20:59:51.070211    5057 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1013 20:59:51.070287    5057 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1013 20:59:51.102107    5057 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1013 20:59:51.102181    5057 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1013 20:59:51.250186    5057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1013 20:59:51.273234    5057 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1013 20:59:51.273301    5057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1013 20:59:51.321439    5057 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1013 20:59:51.321516    5057 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1013 20:59:51.360897    5057 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1013 20:59:51.360975    5057 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1013 20:59:51.402728    5057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1013 20:59:51.427986    5057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1013 20:59:51.490165    5057 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1013 20:59:51.490191    5057 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1013 20:59:51.562434    5057 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1013 20:59:51.562459    5057 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1013 20:59:51.780433    5057 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1013 20:59:51.780505    5057 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1013 20:59:51.881984    5057 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1013 20:59:51.882063    5057 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1013 20:59:52.063526    5057 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1013 20:59:52.063601    5057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1013 20:59:52.149040    5057 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.307260035s)
	I1013 20:59:52.149120    5057 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
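The completion above is the CoreDNS customization step: the sed pipeline splices a hosts plugin stanza into the Corefile stored in the coredns ConfigMap, so that pods inside the cluster can resolve host.minikube.internal to the host-side gateway address 192.168.49.1. A read-only check along these lines should show the injected stanza afterwards (kubectl path and kubeconfig taken from the log; the grep window is an illustrative choice):

	sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system get configmap coredns -o yaml | grep -A 3 'hosts {'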
	I1013 20:59:52.149597    5057 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.146136986s)
	I1013 20:59:52.150224    5057 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.646658653s)
	I1013 20:59:52.151353    5057 node_ready.go:35] waiting up to 6m0s for node "addons-421494" to be "Ready" ...
	I1013 20:59:52.215766    5057 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1013 20:59:52.215849    5057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1013 20:59:52.358054    5057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1013 20:59:52.395361    5057 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1013 20:59:52.395439    5057 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1013 20:59:52.564835    5057 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1013 20:59:52.564903    5057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1013 20:59:52.655735    5057 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-421494" context rescaled to 1 replicas
	I1013 20:59:52.777918    5057 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1013 20:59:52.777986    5057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1013 20:59:52.935474    5057 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1013 20:59:52.935550    5057 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1013 20:59:53.136334    5057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1013 20:59:53.882662    5057 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.129907574s)
	I1013 20:59:53.882785    5057 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.057506904s)
	I1013 20:59:53.882826    5057 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.267068811s)
	I1013 20:59:54.236870    5057 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (3.385536165s)
	W1013 20:59:54.236966    5057 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 20:59:54.237002    5057 retry.go:31] will retry after 300.493465ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
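The validation failure driving this retry loop is kubectl's generic complaint about a manifest document that carries no type information: every document handed to kubectl apply must set both apiVersion and kind, and an empty document (for example one left behind by a stray --- separator) or a truncated CRD header produces exactly this message, independent of what else is in the file. A client-side dry run against the same file should reproduce the error without touching the cluster; the command below is reconstructed from the log, with --dry-run=client as the only addition:

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl \
	  apply --dry-run=client -f /etc/kubernetes/addons/ig-crd.yaml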
	I1013 20:59:54.236939    5057 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.35421062s)
	W1013 20:59:54.289675    5057 node_ready.go:57] node "addons-421494" has "Ready":"False" status (will retry)
	I1013 20:59:54.538071    5057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 20:59:55.747693    5057 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.750579999s)
	I1013 20:59:55.747824    5057 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.702083772s)
	I1013 20:59:55.748068    5057 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.497814125s)
	I1013 20:59:55.748253    5057 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.345414679s)
	I1013 20:59:55.748299    5057 addons.go:479] Verifying addon metrics-server=true in "addons-421494"
	I1013 20:59:55.748355    5057 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.320299617s)
	I1013 20:59:55.748570    5057 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.856822066s)
	I1013 20:59:55.748616    5057 addons.go:479] Verifying addon ingress=true in "addons-421494"
	I1013 20:59:55.747637    5057 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.793573239s)
	I1013 20:59:55.748714    5057 addons.go:479] Verifying addon registry=true in "addons-421494"
	I1013 20:59:55.751982    5057 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-421494 service yakd-dashboard -n yakd-dashboard
	
	I1013 20:59:55.752104    5057 out.go:179] * Verifying ingress addon...
	I1013 20:59:55.752164    5057 out.go:179] * Verifying registry addon...
	I1013 20:59:55.757176    5057 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1013 20:59:55.757242    5057 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1013 20:59:55.837413    5057 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1013 20:59:55.837433    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 20:59:55.837636    5057 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1013 20:59:55.837643    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1013 20:59:55.910348    5057 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
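The "object has been modified" failure is the API server's optimistic-concurrency conflict: the default-storageclass callback read the local-path StorageClass, another writer (plausibly the storage-provisioner-rancher addon that owns that class) updated it in the meantime, and the follow-up write carried a stale resourceVersion. A strategic-merge patch carries no resourceVersion precondition and therefore sidesteps this particular race; as an illustrative alternative to the update-based callback (not what minikube actually does), the same non-default marking could be applied with:

	kubectl patch storageclass local-path -p \
	  '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'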
	I1013 20:59:56.050793    5057 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.692641596s)
	W1013 20:59:56.050895    5057 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1013 20:59:56.050940    5057 retry.go:31] will retry after 313.050139ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
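This retry is an ordering problem rather than a malformed manifest: the VolumeSnapshot CRDs are created in the same apply batch as the csi-hostpath-snapclass object, and until the API server has established the new CustomResourceDefinitions the kind VolumeSnapshotClass has no REST mapping, hence "ensure CRDs are installed first". The forced re-apply that completes at 20:59:59 below is not followed by another retry for this batch, which is consistent with the CRDs having settled by then. A sequenced version of the same step, splitting the batch and waiting for establishment, would look roughly like this (file and CRD names inferred from the log; the wait condition and timeout are illustrative):

	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for=condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml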
	I1013 20:59:56.264264    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 20:59:56.264723    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 20:59:56.364992    5057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1013 20:59:56.428797    5057 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.292353629s)
	I1013 20:59:56.428882    5057 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-421494"
	I1013 20:59:56.429178    5057 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.890945312s)
	W1013 20:59:56.429229    5057 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 20:59:56.429285    5057 retry.go:31] will retry after 493.833301ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 20:59:56.431891    5057 out.go:179] * Verifying csi-hostpath-driver addon...
	I1013 20:59:56.435954    5057 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1013 20:59:56.446089    5057 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1013 20:59:56.446171    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1013 20:59:56.654207    5057 node_ready.go:57] node "addons-421494" has "Ready":"False" status (will retry)
	I1013 20:59:56.760906    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 20:59:56.761352    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 20:59:56.923806    5057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 20:59:56.940220    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 20:59:57.174860    5057 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1013 20:59:57.174946    5057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-421494
	I1013 20:59:57.221519    5057 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/addons-421494/id_rsa Username:docker}
	I1013 20:59:57.266196    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 20:59:57.266420    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 20:59:57.356222    5057 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1013 20:59:57.371917    5057 addons.go:238] Setting addon gcp-auth=true in "addons-421494"
	I1013 20:59:57.371964    5057 host.go:66] Checking if "addons-421494" exists ...
	I1013 20:59:57.372406    5057 cli_runner.go:164] Run: docker container inspect addons-421494 --format={{.State.Status}}
	I1013 20:59:57.401392    5057 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1013 20:59:57.401456    5057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-421494
	I1013 20:59:57.431981    5057 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/addons-421494/id_rsa Username:docker}
	I1013 20:59:57.439911    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 20:59:57.761537    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 20:59:57.761922    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 20:59:57.939851    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 20:59:58.260827    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 20:59:58.261069    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 20:59:58.439007    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1013 20:59:58.654997    5057 node_ready.go:57] node "addons-421494" has "Ready":"False" status (will retry)
	I1013 20:59:58.763229    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 20:59:58.763386    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 20:59:58.939498    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 20:59:59.043421    5057 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.678336488s)
	I1013 20:59:59.043617    5057 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.119770306s)
	W1013 20:59:59.043657    5057 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 20:59:59.043692    5057 retry.go:31] will retry after 825.929978ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 20:59:59.043758    5057 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.642344797s)
	I1013 20:59:59.046966    5057 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1013 20:59:59.049723    5057 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1013 20:59:59.052505    5057 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1013 20:59:59.052529    5057 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1013 20:59:59.065182    5057 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1013 20:59:59.065205    5057 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1013 20:59:59.077376    5057 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1013 20:59:59.077397    5057 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1013 20:59:59.090300    5057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1013 20:59:59.264975    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 20:59:59.265814    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 20:59:59.445420    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 20:59:59.571435    5057 addons.go:479] Verifying addon gcp-auth=true in "addons-421494"
	I1013 20:59:59.576301    5057 out.go:179] * Verifying gcp-auth addon...
	I1013 20:59:59.579977    5057 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1013 20:59:59.585481    5057 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1013 20:59:59.585506    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 20:59:59.760633    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 20:59:59.760975    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 20:59:59.870361    5057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 20:59:59.940200    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:00.107483    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:00.287795    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:00.299725    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:00.464917    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:00.599927    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1013 21:00:00.656235    5057 node_ready.go:57] node "addons-421494" has "Ready":"False" status (will retry)
	I1013 21:00:00.769962    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:00.774520    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:00.954313    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:01.094964    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:01.262964    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:01.266216    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:01.482571    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:01.598343    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:01.689391    5057 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.818980689s)
	W1013 21:00:01.689499    5057 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:00:01.689551    5057 retry.go:31] will retry after 532.966944ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:00:01.789336    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:01.789714    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:01.944371    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:02.084731    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:02.223405    5057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 21:00:02.262787    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:02.263439    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:02.440196    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:02.583583    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:02.762130    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:02.763166    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:02.940220    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1013 21:00:03.083991    5057 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:00:03.084020    5057 retry.go:31] will retry after 1.670594067s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:00:03.086323    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1013 21:00:03.155048    5057 node_ready.go:57] node "addons-421494" has "Ready":"False" status (will retry)
	I1013 21:00:03.261458    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:03.261597    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:03.439612    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:03.583558    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:03.761860    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:03.762390    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:03.939654    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:04.083669    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:04.260967    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:04.261159    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:04.439593    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:04.583622    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:04.754852    5057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 21:00:04.765913    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:04.766641    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:04.939993    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:05.083633    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1013 21:00:05.155202    5057 node_ready.go:57] node "addons-421494" has "Ready":"False" status (will retry)
	I1013 21:00:05.262206    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:05.269742    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:05.440345    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:05.583137    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1013 21:00:05.626999    5057 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:00:05.627069    5057 retry.go:31] will retry after 2.48892018s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:00:05.761529    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:05.761779    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:05.939386    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:06.083233    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:06.260659    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:06.261593    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:06.438838    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:06.582569    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:06.761526    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:06.761711    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:06.939912    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:07.083592    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:07.261621    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:07.261750    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:07.439858    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:07.583473    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1013 21:00:07.654173    5057 node_ready.go:57] node "addons-421494" has "Ready":"False" status (will retry)
	I1013 21:00:07.760379    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:07.760572    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:07.939553    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:08.083424    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:08.116580    5057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 21:00:08.262759    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:08.263295    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:08.439875    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:08.583310    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:08.761518    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:08.761995    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1013 21:00:08.928518    5057 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:00:08.928551    5057 retry.go:31] will retry after 2.44024344s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:00:08.939471    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:09.083260    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:09.262453    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:09.263120    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:09.439210    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:09.584653    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1013 21:00:09.654524    5057 node_ready.go:57] node "addons-421494" has "Ready":"False" status (will retry)
	I1013 21:00:09.761141    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:09.761450    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:09.939118    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:10.083112    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:10.260738    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:10.260949    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:10.439132    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:10.583395    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:10.761676    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:10.761894    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:10.940025    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:11.084208    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:11.261334    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:11.261631    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:11.368986    5057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 21:00:11.440622    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:11.584199    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1013 21:00:11.655417    5057 node_ready.go:57] node "addons-421494" has "Ready":"False" status (will retry)
	I1013 21:00:11.761382    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:11.761744    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:11.939790    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:12.083417    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1013 21:00:12.185065    5057 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:00:12.185147    5057 retry.go:31] will retry after 5.307813202s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:00:12.261267    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:12.261447    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:12.440088    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:12.582888    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:12.760666    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:12.760960    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:12.938735    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:13.083254    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:13.261119    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:13.261397    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:13.439391    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:13.583337    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:13.761640    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:13.761691    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:13.939495    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:14.083861    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1013 21:00:14.154335    5057 node_ready.go:57] node "addons-421494" has "Ready":"False" status (will retry)
	I1013 21:00:14.260741    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:14.261335    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:14.439822    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:14.582684    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:14.760485    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:14.760942    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:14.939818    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:15.083175    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:15.261715    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:15.261809    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:15.440164    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:15.583140    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:15.761236    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:15.761424    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:15.939415    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:16.083371    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:16.261277    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:16.261410    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:16.440055    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:16.582740    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1013 21:00:16.654386    5057 node_ready.go:57] node "addons-421494" has "Ready":"False" status (will retry)
	I1013 21:00:16.760580    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:16.760731    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:16.939436    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:17.083412    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:17.260905    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:17.261038    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:17.438908    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:17.494012    5057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 21:00:17.583286    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:17.762394    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:17.762959    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:17.939906    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:18.083979    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:18.262133    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:18.262645    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1013 21:00:18.316027    5057 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:00:18.316057    5057 retry.go:31] will retry after 6.056174972s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
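
The repeated "apply failed, will retry" entries above and below all share one root cause: /etc/kubernetes/addons/ig-crd.yaml is being applied without the top-level apiVersion and kind fields, so kubectl rejects that file on every retry while the remaining gadget resources ("unchanged"/"configured") apply cleanly. A minimal sketch of how this could be confirmed on the node (the file path is taken from the log; the head invocation itself is illustrative and was not part of the test run):

	# Print the top of the manifest that kubectl keeps rejecting.
	# A well-formed CRD manifest is expected to begin with, e.g.:
	#   apiVersion: apiextensions.k8s.io/v1
	#   kind: CustomResourceDefinition
	sudo head -n 10 /etc/kubernetes/addons/ig-crd.yaml
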
	I1013 21:00:18.439232    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:18.583064    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1013 21:00:18.654972    5057 node_ready.go:57] node "addons-421494" has "Ready":"False" status (will retry)
	I1013 21:00:18.760953    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:18.761270    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:18.940120    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:19.083173    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:19.263620    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:19.263889    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:19.439597    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:19.583850    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:19.760746    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:19.761116    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:19.939182    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:20.084095    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:20.261396    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:20.261825    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:20.439477    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:20.583210    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:20.761080    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:20.762206    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:20.939089    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:21.083109    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1013 21:00:21.154829    5057 node_ready.go:57] node "addons-421494" has "Ready":"False" status (will retry)
	I1013 21:00:21.261505    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:21.261959    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:21.438965    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:21.583048    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:21.760801    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:21.760918    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:21.939423    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:22.083030    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:22.261028    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:22.261203    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:22.439324    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:22.583224    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:22.761195    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:22.761337    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:22.939330    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:23.083151    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:23.260707    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:23.261206    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:23.439812    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:23.583447    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1013 21:00:23.653874    5057 node_ready.go:57] node "addons-421494" has "Ready":"False" status (will retry)
	I1013 21:00:23.761416    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:23.761980    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:23.939086    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:24.083089    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:24.261506    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:24.261738    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:24.372531    5057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 21:00:24.439514    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:24.583954    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:24.763239    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:24.763560    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:24.939106    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:25.084215    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1013 21:00:25.218565    5057 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:00:25.218593    5057 retry.go:31] will retry after 6.386486728s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:00:25.260833    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:25.261722    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:25.439654    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:25.583736    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1013 21:00:25.654334    5057 node_ready.go:57] node "addons-421494" has "Ready":"False" status (will retry)
	I1013 21:00:25.760842    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:25.761014    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:25.938942    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:26.082895    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:26.260755    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:26.260849    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:26.438797    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:26.583735    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:26.760503    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:26.760846    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:26.938729    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:27.083922    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:27.261279    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:27.261860    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:27.438846    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:27.583646    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1013 21:00:27.654702    5057 node_ready.go:57] node "addons-421494" has "Ready":"False" status (will retry)
	I1013 21:00:27.760933    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:27.761361    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:27.939321    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:28.083455    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:28.260962    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:28.261019    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:28.438936    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:28.583740    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:28.761251    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:28.761613    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:28.939322    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:29.083337    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:29.263313    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:29.263408    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:29.439615    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:29.582905    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:29.761185    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:29.761356    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:29.939481    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:30.084361    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1013 21:00:30.154995    5057 node_ready.go:57] node "addons-421494" has "Ready":"False" status (will retry)
	I1013 21:00:30.261567    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:30.261634    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:30.439463    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:30.583503    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:30.760042    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:30.760662    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:30.939740    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:31.083668    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:31.260528    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:31.261023    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:31.469740    5057 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1013 21:00:31.469767    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:31.587269    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:31.605554    5057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 21:00:31.745832    5057 node_ready.go:49] node "addons-421494" is "Ready"
	I1013 21:00:31.745863    5057 node_ready.go:38] duration metric: took 39.594450036s for node "addons-421494" to be "Ready" ...
	I1013 21:00:31.745885    5057 api_server.go:52] waiting for apiserver process to appear ...
	I1013 21:00:31.745942    5057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 21:00:31.781014    5057 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1013 21:00:31.781041    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:31.781392    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:31.945361    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:32.092332    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:32.262545    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:32.262688    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:32.439869    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:32.585554    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:32.764430    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:32.765310    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:32.956283    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:33.083203    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:33.261768    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:33.261866    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:33.288297    5057 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.542322546s)
	I1013 21:00:33.288328    5057 api_server.go:72] duration metric: took 43.952343109s to wait for apiserver process to appear ...
	I1013 21:00:33.288334    5057 api_server.go:88] waiting for apiserver healthz status ...
	I1013 21:00:33.288350    5057 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1013 21:00:33.289122    5057 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.683519475s)
	W1013 21:00:33.289171    5057 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:00:33.289192    5057 retry.go:31] will retry after 9.2630868s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:00:33.296625    5057 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1013 21:00:33.298198    5057 api_server.go:141] control plane version: v1.34.1
	I1013 21:00:33.298227    5057 api_server.go:131] duration metric: took 9.883222ms to wait for apiserver health ...
	I1013 21:00:33.298235    5057 system_pods.go:43] waiting for kube-system pods to appear ...
	I1013 21:00:33.302180    5057 system_pods.go:59] 19 kube-system pods found
	I1013 21:00:33.302212    5057 system_pods.go:61] "coredns-66bc5c9577-zfn57" [2a4119f9-1325-459c-b331-e9e2f946ca94] Running
	I1013 21:00:33.302222    5057 system_pods.go:61] "csi-hostpath-attacher-0" [63ba0966-f0f0-4f2e-a04f-8cc0d6e38857] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1013 21:00:33.302229    5057 system_pods.go:61] "csi-hostpath-resizer-0" [412ef547-052e-4b6a-bef2-8a89277fc6cd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1013 21:00:33.302249    5057 system_pods.go:61] "csi-hostpathplugin-c6mtm" [9179db86-4876-478d-8469-82c3b0a2b7dc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1013 21:00:33.302259    5057 system_pods.go:61] "etcd-addons-421494" [e0231175-9578-4f4b-bc9c-3219db42e926] Running
	I1013 21:00:33.302264    5057 system_pods.go:61] "kindnet-vz77r" [43fa0e44-0713-4797-b4f0-22127befb175] Running
	I1013 21:00:33.302269    5057 system_pods.go:61] "kube-apiserver-addons-421494" [6bd64ad7-7a1b-4364-a814-c958df98b58d] Running
	I1013 21:00:33.302274    5057 system_pods.go:61] "kube-controller-manager-addons-421494" [21ea2dae-cb9d-4e3d-9bd5-d8d7150998de] Running
	I1013 21:00:33.302286    5057 system_pods.go:61] "kube-ingress-dns-minikube" [f6967331-ef1c-461a-95e8-89133a75c3ee] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1013 21:00:33.302290    5057 system_pods.go:61] "kube-proxy-zrcq6" [cab0a945-0c0d-497f-8ada-c7b45dabc7fa] Running
	I1013 21:00:33.302296    5057 system_pods.go:61] "kube-scheduler-addons-421494" [77f214aa-809f-4322-8c48-b508fe196867] Running
	I1013 21:00:33.302309    5057 system_pods.go:61] "metrics-server-85b7d694d7-hrqb8" [496e3426-b9d3-4219-ba0d-ab73c596e817] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1013 21:00:33.302324    5057 system_pods.go:61] "nvidia-device-plugin-daemonset-lswkm" [09e2dc90-684c-40b7-ad9c-333959dc27fa] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1013 21:00:33.302337    5057 system_pods.go:61] "registry-66898fdd98-5nbln" [41505187-6ea8-4010-80bf-50e2d38aa5e0] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1013 21:00:33.302345    5057 system_pods.go:61] "registry-creds-764b6fb674-f5gvj" [e6126817-d300-48b3-a682-ebad0a32e077] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1013 21:00:33.302352    5057 system_pods.go:61] "registry-proxy-nfn7w" [2d213357-a5ce-4cbc-bcde-d13049d2406e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1013 21:00:33.302360    5057 system_pods.go:61] "snapshot-controller-7d9fbc56b8-6cm8c" [5211d95d-039f-4476-a2af-de0bae933a16] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1013 21:00:33.302372    5057 system_pods.go:61] "snapshot-controller-7d9fbc56b8-9phdb" [17c48797-1ee2-46d3-98a1-1b6f33762c7a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1013 21:00:33.302378    5057 system_pods.go:61] "storage-provisioner" [681cc811-8bdb-4841-b1f8-3fc44fb6b5c4] Running
	I1013 21:00:33.302393    5057 system_pods.go:74] duration metric: took 4.146533ms to wait for pod list to return data ...
	I1013 21:00:33.302406    5057 default_sa.go:34] waiting for default service account to be created ...
	I1013 21:00:33.306923    5057 default_sa.go:45] found service account: "default"
	I1013 21:00:33.306957    5057 default_sa.go:55] duration metric: took 4.545145ms for default service account to be created ...
	I1013 21:00:33.306967    5057 system_pods.go:116] waiting for k8s-apps to be running ...
	I1013 21:00:33.312303    5057 system_pods.go:86] 19 kube-system pods found
	I1013 21:00:33.312332    5057 system_pods.go:89] "coredns-66bc5c9577-zfn57" [2a4119f9-1325-459c-b331-e9e2f946ca94] Running
	I1013 21:00:33.312350    5057 system_pods.go:89] "csi-hostpath-attacher-0" [63ba0966-f0f0-4f2e-a04f-8cc0d6e38857] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1013 21:00:33.312356    5057 system_pods.go:89] "csi-hostpath-resizer-0" [412ef547-052e-4b6a-bef2-8a89277fc6cd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1013 21:00:33.312363    5057 system_pods.go:89] "csi-hostpathplugin-c6mtm" [9179db86-4876-478d-8469-82c3b0a2b7dc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1013 21:00:33.312368    5057 system_pods.go:89] "etcd-addons-421494" [e0231175-9578-4f4b-bc9c-3219db42e926] Running
	I1013 21:00:33.312374    5057 system_pods.go:89] "kindnet-vz77r" [43fa0e44-0713-4797-b4f0-22127befb175] Running
	I1013 21:00:33.312382    5057 system_pods.go:89] "kube-apiserver-addons-421494" [6bd64ad7-7a1b-4364-a814-c958df98b58d] Running
	I1013 21:00:33.312386    5057 system_pods.go:89] "kube-controller-manager-addons-421494" [21ea2dae-cb9d-4e3d-9bd5-d8d7150998de] Running
	I1013 21:00:33.312397    5057 system_pods.go:89] "kube-ingress-dns-minikube" [f6967331-ef1c-461a-95e8-89133a75c3ee] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1013 21:00:33.312402    5057 system_pods.go:89] "kube-proxy-zrcq6" [cab0a945-0c0d-497f-8ada-c7b45dabc7fa] Running
	I1013 21:00:33.312407    5057 system_pods.go:89] "kube-scheduler-addons-421494" [77f214aa-809f-4322-8c48-b508fe196867] Running
	I1013 21:00:33.312425    5057 system_pods.go:89] "metrics-server-85b7d694d7-hrqb8" [496e3426-b9d3-4219-ba0d-ab73c596e817] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1013 21:00:33.312439    5057 system_pods.go:89] "nvidia-device-plugin-daemonset-lswkm" [09e2dc90-684c-40b7-ad9c-333959dc27fa] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1013 21:00:33.312445    5057 system_pods.go:89] "registry-66898fdd98-5nbln" [41505187-6ea8-4010-80bf-50e2d38aa5e0] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1013 21:00:33.312459    5057 system_pods.go:89] "registry-creds-764b6fb674-f5gvj" [e6126817-d300-48b3-a682-ebad0a32e077] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1013 21:00:33.312464    5057 system_pods.go:89] "registry-proxy-nfn7w" [2d213357-a5ce-4cbc-bcde-d13049d2406e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1013 21:00:33.312470    5057 system_pods.go:89] "snapshot-controller-7d9fbc56b8-6cm8c" [5211d95d-039f-4476-a2af-de0bae933a16] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1013 21:00:33.312480    5057 system_pods.go:89] "snapshot-controller-7d9fbc56b8-9phdb" [17c48797-1ee2-46d3-98a1-1b6f33762c7a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1013 21:00:33.312495    5057 system_pods.go:89] "storage-provisioner" [681cc811-8bdb-4841-b1f8-3fc44fb6b5c4] Running
	I1013 21:00:33.312503    5057 system_pods.go:126] duration metric: took 5.529621ms to wait for k8s-apps to be running ...
	I1013 21:00:33.312514    5057 system_svc.go:44] waiting for kubelet service to be running ....
	I1013 21:00:33.312576    5057 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 21:00:33.325544    5057 system_svc.go:56] duration metric: took 13.021443ms WaitForService to wait for kubelet
	I1013 21:00:33.325582    5057 kubeadm.go:586] duration metric: took 43.989595404s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 21:00:33.325605    5057 node_conditions.go:102] verifying NodePressure condition ...
	I1013 21:00:33.328893    5057 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1013 21:00:33.328936    5057 node_conditions.go:123] node cpu capacity is 2
	I1013 21:00:33.328958    5057 node_conditions.go:105] duration metric: took 3.34771ms to run NodePressure ...
	I1013 21:00:33.328975    5057 start.go:241] waiting for startup goroutines ...
	I1013 21:00:33.440060    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:33.584251    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:33.761258    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:33.761411    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:33.939336    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:34.083476    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:34.261009    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:34.261206    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:34.438966    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:34.582635    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:34.760755    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:34.760813    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:34.940908    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:35.083820    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:35.263224    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:35.263398    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:35.440272    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:35.583079    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:35.762808    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:35.763469    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:35.939866    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:36.083419    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:36.262711    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:36.263198    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:36.442196    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:36.583298    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:36.762680    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:36.763143    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:36.942960    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:37.085138    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:37.263604    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:37.264056    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:37.441664    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:37.585947    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:37.767737    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:37.768149    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:37.940757    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:38.085639    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:38.264261    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:38.264534    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:38.442253    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:38.592352    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:38.762133    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:38.762568    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:38.955845    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:39.082937    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:39.262866    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:39.263276    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:39.439093    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:39.582707    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:39.761367    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:39.761739    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:39.939557    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:40.091157    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:40.269721    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:40.270996    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:40.439413    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:40.583922    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:40.763568    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:40.763718    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:40.947701    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:41.084202    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:41.260480    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:41.261961    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:41.439541    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:41.584301    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:41.761697    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:41.761845    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:41.940200    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:42.084138    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:42.262084    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:42.262335    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:42.439245    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:42.552536    5057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 21:00:42.583606    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:42.762779    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:42.762993    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:42.939770    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:43.083977    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:43.261574    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:43.262125    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:43.438962    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:43.582870    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:43.701605    5057 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.149034673s)
	W1013 21:00:43.701637    5057 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:00:43.701655    5057 retry.go:31] will retry after 11.850607072s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:00:43.761508    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:43.761742    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:43.940547    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:44.083735    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:44.261343    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:44.261460    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:44.440265    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:44.583260    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:44.761533    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:44.761685    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:44.942074    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:45.084952    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:45.267303    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:45.268631    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:45.441702    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:45.583755    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:45.762405    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:45.762490    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:45.941093    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:46.083625    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:46.262474    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:46.262644    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:46.440331    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:46.583497    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:46.762138    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:46.762684    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:46.940496    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:47.083626    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:47.262305    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:47.262695    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:47.440602    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:47.583690    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:47.762755    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:47.763506    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:47.940318    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:48.084156    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:48.261405    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:48.261808    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:48.440237    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:48.582849    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:48.761018    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:48.761143    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:48.940438    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:49.083417    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:49.270403    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:49.271169    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:49.440490    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:49.586546    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:49.765860    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:49.766301    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:49.947884    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:50.090789    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:50.261805    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:50.262060    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:50.440147    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:50.583390    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:50.762274    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:50.767429    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:50.939939    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:51.085150    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:51.261883    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:51.262355    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:51.440481    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:51.583298    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:51.761720    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:51.762804    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:51.941817    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:52.088424    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:52.261879    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:52.262223    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:52.439209    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:52.582813    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:52.761888    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:52.762019    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:52.939501    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:53.083683    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:53.261852    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:53.262022    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:53.439325    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:53.594268    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:53.762471    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:53.762637    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:53.940700    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:54.083880    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:54.264317    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:54.264434    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:54.439699    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:54.583691    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:54.761009    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:54.761623    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:54.940276    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:55.086999    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:55.262913    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:55.263375    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:55.439840    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:55.553159    5057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 21:00:55.583229    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:55.761471    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:55.761643    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:55.939775    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:56.083566    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:56.265430    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:56.265528    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:56.439250    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:56.583858    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:56.728163    5057 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.174966934s)
	W1013 21:00:56.728241    5057 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1013 21:00:56.728325    5057 retry.go:31] will retry after 44.855996818s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
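	The validation error above is kubectl refusing a document in ig-crd.yaml that has no top-level apiVersion or kind field, which usually means the generated CRD manifest is empty or truncated rather than syntactically bad YAML. As a rough illustration only (this is not how kubectl itself validates, and the check below is a naive text scan), a sketch like the following could report which document in a multi-document manifest is missing those keys; the path is taken from the log and the file would have to be read where it exists, e.g. inside the node via minikube ssh:

	// Hypothetical diagnostic sketch: scan a multi-document manifest for
	// documents that never set a top-level apiVersion or kind key.
	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	func main() {
		// Path copied from the log above; adjust to wherever the file is readable.
		path := "/etc/kubernetes/addons/ig-crd.yaml"
		f, err := os.Open(path)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		doc, hasAPIVersion, hasKind := 1, false, false
		report := func() {
			if !hasAPIVersion || !hasKind {
				fmt.Printf("document %d: apiVersion set=%v, kind set=%v\n", doc, hasAPIVersion, hasKind)
			}
		}

		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := sc.Text()
			switch {
			case strings.HasPrefix(line, "---"): // YAML document separator
				report()
				doc, hasAPIVersion, hasKind = doc+1, false, false
			case strings.HasPrefix(line, "apiVersion:"):
				hasAPIVersion = true
			case strings.HasPrefix(line, "kind:"):
				hasKind = true
			}
		}
		report()
	}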
	I1013 21:00:56.761240    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:56.761766    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:56.940385    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:57.084096    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:57.260793    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:57.261711    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:57.439643    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:57.583528    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:57.764943    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:57.765195    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:57.939448    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:58.083760    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:58.262152    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:58.262261    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:58.439127    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:58.582954    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:58.761917    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:58.762001    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:58.939384    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:59.083890    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:59.264205    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:59.264331    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:59.439641    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:00:59.583845    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:00:59.762280    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:00:59.763003    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:00:59.944178    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:00.093202    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:00.339567    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:01:00.339943    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:00.442338    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:00.583523    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:00.761483    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:00.762409    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:01:00.940455    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:01.084077    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:01.272773    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:01.274519    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:01:01.440073    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:01.582997    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:01.763017    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:01.763165    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:01:01.939319    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:02.083273    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:02.261479    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:02.261665    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:01:02.439621    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:02.583549    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:02.761406    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:02.762453    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:01:02.939770    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:03.084456    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:03.262289    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:01:03.262840    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:03.441014    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:03.583038    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:03.762628    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:03.763092    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:01:03.940519    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:04.083638    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:04.262780    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:01:04.263338    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:04.439948    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:04.585604    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:04.761891    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:01:04.762426    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:04.940662    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:05.084352    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:05.263040    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:01:05.263585    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:05.440350    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:05.583269    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:05.761718    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:05.762775    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:01:05.939968    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:06.083163    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:06.260754    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:01:06.261538    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:06.440094    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:06.583481    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:06.761324    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:06.761452    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:01:06.940042    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:07.084422    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:07.262931    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:07.263246    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:01:07.444358    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:07.583727    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:07.762399    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:01:07.762593    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:07.939649    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:08.083384    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:08.261192    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:01:08.262362    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:08.439558    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:08.583288    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:08.761785    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:01:08.761969    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:08.939473    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:09.084073    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:09.265791    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:01:09.270576    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:09.440269    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:09.583310    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:09.761426    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:09.761990    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:01:09.939106    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:10.083671    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:10.261511    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:01:10.261745    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:10.448659    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:10.584097    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:10.762783    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:01:10.763114    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:10.940939    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:11.083065    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:11.261537    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:11.261712    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:01:11.439082    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:11.582795    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:11.761573    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:01:11.762761    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:11.940421    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:12.083679    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:12.262643    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:01:12.263442    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:12.440294    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:12.588883    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:12.761710    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:12.761791    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:01:12.939721    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:13.083814    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:13.264407    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:01:13.264648    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:13.439695    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:13.583742    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:13.763119    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:01:13.763282    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:13.939750    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:14.083723    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:14.261347    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:01:14.261495    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:14.439712    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:14.583902    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:14.761443    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:14.761707    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:01:14.939909    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:15.085108    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:15.262307    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:15.262471    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:01:15.439848    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:15.583304    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:15.761840    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:01:15.762111    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:15.939547    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:16.083532    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:16.261358    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:16.261548    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:01:16.440469    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:16.583463    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:16.761637    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:16.762909    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1013 21:01:16.941204    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:17.083012    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:17.261142    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:17.261315    5057 kapi.go:107] duration metric: took 1m21.504069547s to wait for kubernetes.io/minikube-addons=registry ...
	I1013 21:01:17.439565    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:17.583502    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:17.760660    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:17.939939    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:18.083878    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:18.262355    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:18.439773    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:18.583899    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:18.761450    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:18.940386    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:19.083541    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:19.265562    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:19.439755    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:19.582963    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:19.761533    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:19.940139    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:20.083720    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:20.261114    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:20.439837    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:20.584286    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:20.760577    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:20.940327    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:21.083306    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:21.260790    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:21.440002    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:21.583058    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:21.761602    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:21.940400    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:22.083669    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:22.261142    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:22.441965    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:22.582972    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:22.761681    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:22.938891    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:23.082902    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:23.262575    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:23.440846    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:23.583805    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:23.760880    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:23.940802    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:24.084053    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:24.261262    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:24.440573    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:24.584981    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:24.761150    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:24.940210    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:25.083941    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1013 21:01:25.261050    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:25.439665    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:25.583936    5057 kapi.go:107] duration metric: took 1m26.003958301s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1013 21:01:25.586965    5057 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-421494 cluster.
	I1013 21:01:25.589996    5057 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1013 21:01:25.592908    5057 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1013 21:01:25.761596    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:25.939709    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:26.261220    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:26.439368    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:26.761723    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:26.940096    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:27.260248    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:27.439630    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:27.760888    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:27.938921    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:28.259940    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:28.439288    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:28.760204    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:28.939212    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:29.262774    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:29.439945    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:29.760759    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:29.939702    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:30.260779    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:30.438923    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:30.760729    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:30.940387    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:31.261491    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:31.440320    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:31.761041    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:31.939249    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:32.260170    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:32.439892    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:32.760363    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:32.940125    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:33.260863    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:33.438865    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:33.761238    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:33.959209    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:34.266138    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:34.439909    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:34.761570    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:34.940331    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:35.260738    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:35.442743    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:35.761271    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:35.940056    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:36.260337    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:36.440506    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:36.760325    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:36.940183    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:37.267571    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:37.440289    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:37.760952    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:37.939800    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:38.261613    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:38.440861    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:38.761379    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:38.940289    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:39.263188    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:39.439238    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:39.761835    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:39.938793    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:40.261193    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:40.438872    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:40.761578    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:40.939900    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:41.263412    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:41.440242    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:41.584524    5057 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1013 21:01:41.762393    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:41.940429    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:42.262242    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:42.440169    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:42.760231    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:42.855620    5057 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.271060813s)
	W1013 21:01:42.855657    5057 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1013 21:01:42.855735    5057 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
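	Once the retry budget is spent, the same validation failure is surfaced as the "Enabling 'inspektor-gadget' returned an error" warning instead of failing the whole start. A minimal sketch of that retry-then-warn pattern follows; the function names, delays, and backoff policy are invented for illustration and are not minikube's actual retry.go or addons.go (the real run above schedules a single ~45s delay before retrying):

	// Sketch of the pattern visible in the log: re-run a step with a growing
	// delay until it succeeds or a deadline passes, then emit the last error
	// as a warning rather than aborting.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// applyAddon stands in for the kubectl apply step; it always fails here so
	// the retry path is exercised.
	func applyAddon() error {
		return errors.New("error validating ig-crd.yaml: apiVersion not set, kind not set")
	}

	func retryWithDeadline(step func() error, deadline time.Duration) error {
		var lastErr error
		delay := 2 * time.Second
		for start := time.Now(); time.Since(start) < deadline; {
			if lastErr = step(); lastErr == nil {
				return nil
			}
			fmt.Printf("will retry after %v: %v\n", delay, lastErr)
			time.Sleep(delay)
			delay *= 2 // back off between attempts
		}
		return lastErr
	}

	func main() {
		if err := retryWithDeadline(applyAddon, 10*time.Second); err != nil {
			fmt.Printf("! Enabling 'inspektor-gadget' returned an error: %v\n", err)
		}
	}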
	I1013 21:01:42.939970    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:43.262083    5057 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1013 21:01:43.450621    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:43.761974    5057 kapi.go:107] duration metric: took 1m48.004797279s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1013 21:01:43.940517    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:44.440295    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:44.940391    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:45.441364    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:45.940275    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:46.439453    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:46.942852    5057 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1013 21:01:47.440025    5057 kapi.go:107] duration metric: took 1m51.004076946s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1013 21:01:47.441517    5057 out.go:179] * Enabled addons: registry-creds, amd-gpu-device-plugin, cloud-spanner, storage-provisioner, nvidia-device-plugin, ingress-dns, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1013 21:01:47.442677    5057 addons.go:514] duration metric: took 1m58.106325975s for enable addons: enabled=[registry-creds amd-gpu-device-plugin cloud-spanner storage-provisioner nvidia-device-plugin ingress-dns metrics-server yakd storage-provisioner-rancher volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I1013 21:01:47.442720    5057 start.go:246] waiting for cluster config update ...
	I1013 21:01:47.442742    5057 start.go:255] writing updated cluster config ...
	I1013 21:01:47.443049    5057 ssh_runner.go:195] Run: rm -f paused
	I1013 21:01:47.446548    5057 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 21:01:47.449724    5057 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-zfn57" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:01:47.455811    5057 pod_ready.go:94] pod "coredns-66bc5c9577-zfn57" is "Ready"
	I1013 21:01:47.455838    5057 pod_ready.go:86] duration metric: took 6.091487ms for pod "coredns-66bc5c9577-zfn57" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:01:47.459430    5057 pod_ready.go:83] waiting for pod "etcd-addons-421494" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:01:47.466921    5057 pod_ready.go:94] pod "etcd-addons-421494" is "Ready"
	I1013 21:01:47.466948    5057 pod_ready.go:86] duration metric: took 7.494514ms for pod "etcd-addons-421494" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:01:47.472173    5057 pod_ready.go:83] waiting for pod "kube-apiserver-addons-421494" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:01:47.476928    5057 pod_ready.go:94] pod "kube-apiserver-addons-421494" is "Ready"
	I1013 21:01:47.476956    5057 pod_ready.go:86] duration metric: took 4.757448ms for pod "kube-apiserver-addons-421494" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:01:47.479122    5057 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-421494" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:01:47.850502    5057 pod_ready.go:94] pod "kube-controller-manager-addons-421494" is "Ready"
	I1013 21:01:47.850531    5057 pod_ready.go:86] duration metric: took 371.37726ms for pod "kube-controller-manager-addons-421494" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:01:48.050879    5057 pod_ready.go:83] waiting for pod "kube-proxy-zrcq6" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:01:48.451018    5057 pod_ready.go:94] pod "kube-proxy-zrcq6" is "Ready"
	I1013 21:01:48.451047    5057 pod_ready.go:86] duration metric: took 400.09032ms for pod "kube-proxy-zrcq6" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:01:48.650177    5057 pod_ready.go:83] waiting for pod "kube-scheduler-addons-421494" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:01:49.050340    5057 pod_ready.go:94] pod "kube-scheduler-addons-421494" is "Ready"
	I1013 21:01:49.050368    5057 pod_ready.go:86] duration metric: took 400.117412ms for pod "kube-scheduler-addons-421494" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:01:49.050380    5057 pod_ready.go:40] duration metric: took 1.603801539s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 21:01:49.465802    5057 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1013 21:01:49.474495    5057 out.go:179] * Done! kubectl is now configured to use "addons-421494" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 13 21:02:17 addons-421494 crio[834]: time="2025-10-13T21:02:17.819696096Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 21:02:17 addons-421494 crio[834]: time="2025-10-13T21:02:17.820349364Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 21:02:17 addons-421494 crio[834]: time="2025-10-13T21:02:17.836945064Z" level=info msg="Created container 47cca60471e57e9535b6a98d4adda9960097a569576c6148d20d6e8eea1c26ad: default/test-local-path/busybox" id=6f4784c8-1411-4d8b-afb3-eebb852dc61e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 21:02:17 addons-421494 crio[834]: time="2025-10-13T21:02:17.840072012Z" level=info msg="Starting container: 47cca60471e57e9535b6a98d4adda9960097a569576c6148d20d6e8eea1c26ad" id=a7b7b83c-4c3a-4210-90f6-8fdb3ad3482c name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 21:02:17 addons-421494 crio[834]: time="2025-10-13T21:02:17.843658058Z" level=info msg="Started container" PID=5410 containerID=47cca60471e57e9535b6a98d4adda9960097a569576c6148d20d6e8eea1c26ad description=default/test-local-path/busybox id=a7b7b83c-4c3a-4210-90f6-8fdb3ad3482c name=/runtime.v1.RuntimeService/StartContainer sandboxID=f6d496ad3cf5b5767eb2554fe8f8b07e5d12a9d352f033b1119c88f418c6e14d
	Oct 13 21:02:19 addons-421494 crio[834]: time="2025-10-13T21:02:19.48168054Z" level=info msg="Stopping pod sandbox: f6d496ad3cf5b5767eb2554fe8f8b07e5d12a9d352f033b1119c88f418c6e14d" id=1d268746-a9a5-4e30-a190-19159640c483 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 13 21:02:19 addons-421494 crio[834]: time="2025-10-13T21:02:19.4824339Z" level=info msg="Got pod network &{Name:test-local-path Namespace:default ID:f6d496ad3cf5b5767eb2554fe8f8b07e5d12a9d352f033b1119c88f418c6e14d UID:3ac8a025-dfcf-419f-8e9d-0611c8f6cb9d NetNS:/var/run/netns/2e953cbc-4aad-42ae-838f-de2c378eac08 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40016e8328}] Aliases:map[]}"
	Oct 13 21:02:19 addons-421494 crio[834]: time="2025-10-13T21:02:19.482592452Z" level=info msg="Deleting pod default_test-local-path from CNI network \"kindnet\" (type=ptp)"
	Oct 13 21:02:19 addons-421494 crio[834]: time="2025-10-13T21:02:19.505955906Z" level=info msg="Stopped pod sandbox: f6d496ad3cf5b5767eb2554fe8f8b07e5d12a9d352f033b1119c88f418c6e14d" id=1d268746-a9a5-4e30-a190-19159640c483 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 13 21:02:20 addons-421494 crio[834]: time="2025-10-13T21:02:20.919089752Z" level=info msg="Running pod sandbox: local-path-storage/helper-pod-delete-pvc-c6a933ea-f6fd-4814-a9cc-d489c3561930/POD" id=ecf2dd71-2b51-4125-9dda-482988eb06c7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 13 21:02:20 addons-421494 crio[834]: time="2025-10-13T21:02:20.919158107Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 21:02:20 addons-421494 crio[834]: time="2025-10-13T21:02:20.927355406Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-c6a933ea-f6fd-4814-a9cc-d489c3561930 Namespace:local-path-storage ID:81a531b91fcffbeeeb3184165a9f85d7cef40d4dd30d036ec253eea921cc2f9e UID:a1e98164-a59c-4045-a2a0-befbd21558f2 NetNS:/var/run/netns/4a3dda17-225a-4004-88e4-aedea4e3eb9a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400167ee30}] Aliases:map[]}"
	Oct 13 21:02:20 addons-421494 crio[834]: time="2025-10-13T21:02:20.927536702Z" level=info msg="Adding pod local-path-storage_helper-pod-delete-pvc-c6a933ea-f6fd-4814-a9cc-d489c3561930 to CNI network \"kindnet\" (type=ptp)"
	Oct 13 21:02:20 addons-421494 crio[834]: time="2025-10-13T21:02:20.966483869Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-c6a933ea-f6fd-4814-a9cc-d489c3561930 Namespace:local-path-storage ID:81a531b91fcffbeeeb3184165a9f85d7cef40d4dd30d036ec253eea921cc2f9e UID:a1e98164-a59c-4045-a2a0-befbd21558f2 NetNS:/var/run/netns/4a3dda17-225a-4004-88e4-aedea4e3eb9a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400167ee30}] Aliases:map[]}"
	Oct 13 21:02:20 addons-421494 crio[834]: time="2025-10-13T21:02:20.966641945Z" level=info msg="Checking pod local-path-storage_helper-pod-delete-pvc-c6a933ea-f6fd-4814-a9cc-d489c3561930 for CNI network kindnet (type=ptp)"
	Oct 13 21:02:20 addons-421494 crio[834]: time="2025-10-13T21:02:20.97151683Z" level=info msg="Ran pod sandbox 81a531b91fcffbeeeb3184165a9f85d7cef40d4dd30d036ec253eea921cc2f9e with infra container: local-path-storage/helper-pod-delete-pvc-c6a933ea-f6fd-4814-a9cc-d489c3561930/POD" id=ecf2dd71-2b51-4125-9dda-482988eb06c7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 13 21:02:20 addons-421494 crio[834]: time="2025-10-13T21:02:20.972745108Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=965b3d59-dd82-4a9d-a251-c4ec5264385e name=/runtime.v1.ImageService/ImageStatus
	Oct 13 21:02:20 addons-421494 crio[834]: time="2025-10-13T21:02:20.976266426Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=2c8faa23-b055-4e67-86ca-b31860e25d39 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 21:02:20 addons-421494 crio[834]: time="2025-10-13T21:02:20.98488412Z" level=info msg="Creating container: local-path-storage/helper-pod-delete-pvc-c6a933ea-f6fd-4814-a9cc-d489c3561930/helper-pod" id=3a8b0b7d-f006-4cfd-afb4-84b851d09c7f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 21:02:20 addons-421494 crio[834]: time="2025-10-13T21:02:20.985197877Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 21:02:21 addons-421494 crio[834]: time="2025-10-13T21:02:20.993972122Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 21:02:21 addons-421494 crio[834]: time="2025-10-13T21:02:20.994530754Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 21:02:21 addons-421494 crio[834]: time="2025-10-13T21:02:21.030898324Z" level=info msg="Created container ce4290d0301e0c035a7a05993a05ca7ee5f5d3d5caf967eb9e8af53bc6540a8d: local-path-storage/helper-pod-delete-pvc-c6a933ea-f6fd-4814-a9cc-d489c3561930/helper-pod" id=3a8b0b7d-f006-4cfd-afb4-84b851d09c7f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 21:02:21 addons-421494 crio[834]: time="2025-10-13T21:02:21.034384402Z" level=info msg="Starting container: ce4290d0301e0c035a7a05993a05ca7ee5f5d3d5caf967eb9e8af53bc6540a8d" id=8a0e0ccf-bc41-42b1-9685-93b7a6ed3844 name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 21:02:21 addons-421494 crio[834]: time="2025-10-13T21:02:21.039628139Z" level=info msg="Started container" PID=5554 containerID=ce4290d0301e0c035a7a05993a05ca7ee5f5d3d5caf967eb9e8af53bc6540a8d description=local-path-storage/helper-pod-delete-pvc-c6a933ea-f6fd-4814-a9cc-d489c3561930/helper-pod id=8a0e0ccf-bc41-42b1-9685-93b7a6ed3844 name=/runtime.v1.RuntimeService/StartContainer sandboxID=81a531b91fcffbeeeb3184165a9f85d7cef40d4dd30d036ec253eea921cc2f9e
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                                          NAMESPACE
	ce4290d0301e0       fc9db2894f4e4b8c296b8c9dab7e18a6e78de700d21bc0cfaf5c78484226db9c                                                                             1 second ago         Exited              helper-pod                               0                   81a531b91fcff       helper-pod-delete-pvc-c6a933ea-f6fd-4814-a9cc-d489c3561930   local-path-storage
	47cca60471e57       docker.io/library/busybox@sha256:aefc3a378c4cf11a6d85071438d3bf7634633a34c6a68d4c5f928516d556c366                                            4 seconds ago        Exited              busybox                                  0                   f6d496ad3cf5b       test-local-path                                              default
	c0fdbf3356257       gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9                                          8 seconds ago        Exited              registry-test                            0                   44e443932274f       registry-test                                                default
	c186e226fe1ef       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          28 seconds ago       Running             busybox                                  0                   f4bf60cf7a06d       busybox                                                      default
	ebb997c7d79d5       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          35 seconds ago       Running             csi-snapshotter                          0                   5899fac2669ce       csi-hostpathplugin-c6mtm                                     kube-system
	0c7386ac64481       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          36 seconds ago       Running             csi-provisioner                          0                   5899fac2669ce       csi-hostpathplugin-c6mtm                                     kube-system
	313e4250764b6       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            38 seconds ago       Running             liveness-probe                           0                   5899fac2669ce       csi-hostpathplugin-c6mtm                                     kube-system
	5231e1c4699c5       registry.k8s.io/ingress-nginx/controller@sha256:f99290cbebde470590890356f061fd429ff3def99cc2dedb1fcd21626c5d73d6                             39 seconds ago       Running             controller                               0                   acb65586c38b4       ingress-nginx-controller-9cc49f96f-bgsnp                     ingress-nginx
	4666db466f3f8       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           45 seconds ago       Running             hostpath                                 0                   5899fac2669ce       csi-hostpathplugin-c6mtm                                     kube-system
	ace4823451e27       c67c707f59d87e1add5896e856d3ed36fbff2a778620f70d33b799e0541a77e3                                                                             46 seconds ago       Exited              patch                                    3                   a959df44da424       ingress-nginx-admission-patch-vjwq4                          ingress-nginx
	26d51c5b89e19       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                47 seconds ago       Running             node-driver-registrar                    0                   5899fac2669ce       csi-hostpathplugin-c6mtm                                     kube-system
	7e0e137602daf       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:f279436ecca5b88c20fd93c0d2a668ace136058ecad987e96e26014585e335b4                            48 seconds ago       Running             gadget                                   0                   50852bf51bcf7       gadget-lrgtr                                                 gadget
	6c7397833e400       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 57 seconds ago       Running             gcp-auth                                 0                   9f6a9ba826692       gcp-auth-78565c9fb4-vt59h                                    gcp-auth
	cdb0ef6db7620       gcr.io/cloud-spanner-emulator/emulator@sha256:c2688dc4b7ecb4546084321d63c2b3b616a54263488137e18fcb7c7005aef086                               About a minute ago   Running             cloud-spanner-emulator                   0                   c8ff66f314323       cloud-spanner-emulator-86bd5cbb97-zldmh                      default
	181410ca5fe49       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              About a minute ago   Running             registry-proxy                           0                   ca67a652a7da2       registry-proxy-nfn7w                                         kube-system
	38812285e7c22       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   a03b81729604e       snapshot-controller-7d9fbc56b8-6cm8c                         kube-system
	18ed9fad96827       docker.io/library/registry@sha256:f26c394e5b7c3a707c7373c3e9388e44f0d5bdd3def19652c6bd2ac1a0fa6758                                           About a minute ago   Running             registry                                 0                   1420fb4c7b171       registry-66898fdd98-5nbln                                    kube-system
	2f837ddcec93c       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              About a minute ago   Running             yakd                                     0                   8aa3e833d2426       yakd-dashboard-5ff678cb9-fz2dg                               yakd-dashboard
	e469acf690df4       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:73b47a951627d604fcf1cf93ddc15004fe3854f881da22f690854d098255f1c1                   About a minute ago   Exited              create                                   0                   2a35515cb5491       ingress-nginx-admission-create-97mkt                         ingress-nginx
	9ba5c620ce249       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   About a minute ago   Running             csi-external-health-monitor-controller   0                   5899fac2669ce       csi-hostpathplugin-c6mtm                                     kube-system
	d970bcf470d76       nvcr.io/nvidia/k8s-device-plugin@sha256:206d989142113ab71eaf27958a0e0a203f40103cf5b48890f5de80fd1b3fcfde                                     About a minute ago   Running             nvidia-device-plugin-ctr                 0                   baff4b682a58f       nvidia-device-plugin-daemonset-lswkm                         kube-system
	b5baee2b95e6c       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             About a minute ago   Running             local-path-provisioner                   0                   63e045eaf2eb1       local-path-provisioner-648f6765c9-w6x97                      local-path-storage
	ba960f407a05a       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              About a minute ago   Running             csi-resizer                              0                   28e9126ec7b8a       csi-hostpath-resizer-0                                       kube-system
	d94a39038ca93       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               About a minute ago   Running             minikube-ingress-dns                     0                   0a6f4b6a787fc       kube-ingress-dns-minikube                                    kube-system
	5842ed1dd0727       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   cf4c8347acee7       snapshot-controller-7d9fbc56b8-9phdb                         kube-system
	fa6943addc3e3       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        About a minute ago   Running             metrics-server                           0                   3f075035cbcf2       metrics-server-85b7d694d7-hrqb8                              kube-system
	af2a904ce7f6b       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             About a minute ago   Running             csi-attacher                             0                   4ef516ee13ef7       csi-hostpath-attacher-0                                      kube-system
	24771ab281e11       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             About a minute ago   Running             storage-provisioner                      0                   3e46513d7c680       storage-provisioner                                          kube-system
	99d43d6662679       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             About a minute ago   Running             coredns                                  0                   f03aa3db10341       coredns-66bc5c9577-zfn57                                     kube-system
	056c2dbfb314d       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             2 minutes ago        Running             kindnet-cni                              0                   ab6f569d07586       kindnet-vz77r                                                kube-system
	99ba07ab68f8f       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             2 minutes ago        Running             kube-proxy                               0                   65e0a8010aa46       kube-proxy-zrcq6                                             kube-system
	b69697c681afb       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             2 minutes ago        Running             etcd                                     0                   119e677171527       etcd-addons-421494                                           kube-system
	3cef779926c40       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             2 minutes ago        Running             kube-controller-manager                  0                   4b3de415d2da3       kube-controller-manager-addons-421494                        kube-system
	65658d48b6c6a       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             2 minutes ago        Running             kube-apiserver                           0                   f5382ce7e516a       kube-apiserver-addons-421494                                 kube-system
	7eaba707b03a1       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             2 minutes ago        Running             kube-scheduler                           0                   6d881a4e14f60       kube-scheduler-addons-421494                                 kube-system
	
	
	==> coredns [99d43d6662679bda28129b2b95cba7f724d95042cc2bd6f3c957a1e2ba16b5d8] <==
	[INFO] 10.244.0.7:38652 - 41791 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002179492s
	[INFO] 10.244.0.7:38652 - 38567 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000101905s
	[INFO] 10.244.0.7:38652 - 19488 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000096244s
	[INFO] 10.244.0.7:54239 - 6604 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000130664s
	[INFO] 10.244.0.7:54239 - 6391 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000179154s
	[INFO] 10.244.0.7:42761 - 30747 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000104572s
	[INFO] 10.244.0.7:42761 - 30542 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000066386s
	[INFO] 10.244.0.7:41334 - 2764 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000082525s
	[INFO] 10.244.0.7:41334 - 2335 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000071161s
	[INFO] 10.244.0.7:41934 - 13130 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001847914s
	[INFO] 10.244.0.7:41934 - 13343 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002010257s
	[INFO] 10.244.0.7:41550 - 10640 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000127193s
	[INFO] 10.244.0.7:41550 - 10812 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000116641s
	[INFO] 10.244.0.19:49606 - 16711 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00069577s
	[INFO] 10.244.0.19:39506 - 21828 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000167864s
	[INFO] 10.244.0.19:43523 - 53182 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000108026s
	[INFO] 10.244.0.19:33116 - 42110 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000138622s
	[INFO] 10.244.0.19:38774 - 7284 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000116116s
	[INFO] 10.244.0.19:44516 - 31416 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000092929s
	[INFO] 10.244.0.19:49065 - 13340 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002962866s
	[INFO] 10.244.0.19:54874 - 57865 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002840038s
	[INFO] 10.244.0.19:59174 - 32170 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003098682s
	[INFO] 10.244.0.19:56378 - 46432 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001833096s
	[INFO] 10.244.0.23:41842 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000187515s
	[INFO] 10.244.0.23:36305 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000148131s
	
	
	==> describe nodes <==
	Name:               addons-421494
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-421494
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6d66ff63385795e7745a92b3d96cb54f5b977801
	                    minikube.k8s.io/name=addons-421494
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_13T20_59_45_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-421494
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-421494"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 Oct 2025 20:59:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-421494
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 Oct 2025 21:02:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 Oct 2025 21:02:18 +0000   Mon, 13 Oct 2025 20:59:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 Oct 2025 21:02:18 +0000   Mon, 13 Oct 2025 20:59:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 Oct 2025 21:02:18 +0000   Mon, 13 Oct 2025 20:59:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 13 Oct 2025 21:02:18 +0000   Mon, 13 Oct 2025 21:00:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-421494
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 1e8b39055d61497394f2cbb9c0725abf
	  System UUID:                f096a897-2137-4c9b-a2f8-e9d35211479f
	  Boot ID:                    d306204e-e74c-4697-9887-c9f3c96cc083
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         32s
	  default                     cloud-spanner-emulator-86bd5cbb97-zldmh                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m29s
	  gadget                      gadget-lrgtr                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  gcp-auth                    gcp-auth-78565c9fb4-vt59h                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-bgsnp                      100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         2m27s
	  kube-system                 coredns-66bc5c9577-zfn57                                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m32s
	  kube-system                 csi-hostpath-attacher-0                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 csi-hostpath-resizer-0                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 csi-hostpathplugin-c6mtm                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 etcd-addons-421494                                            100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m39s
	  kube-system                 kindnet-vz77r                                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m33s
	  kube-system                 kube-apiserver-addons-421494                                  250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 kube-controller-manager-addons-421494                         200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 kube-ingress-dns-minikube                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 kube-proxy-zrcq6                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m33s
	  kube-system                 kube-scheduler-addons-421494                                  100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m39s
	  kube-system                 metrics-server-85b7d694d7-hrqb8                               100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         2m28s
	  kube-system                 nvidia-device-plugin-daemonset-lswkm                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 registry-66898fdd98-5nbln                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m29s
	  kube-system                 registry-creds-764b6fb674-f5gvj                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m31s
	  kube-system                 registry-proxy-nfn7w                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 snapshot-controller-7d9fbc56b8-6cm8c                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 snapshot-controller-7d9fbc56b8-9phdb                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m29s
	  local-path-storage          helper-pod-delete-pvc-c6a933ea-f6fd-4814-a9cc-d489c3561930    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  local-path-storage          local-path-provisioner-648f6765c9-w6x97                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-fz2dg                                0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     2m27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m31s                  kube-proxy       
	  Warning  CgroupV1                 2m46s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m45s (x8 over 2m46s)  kubelet          Node addons-421494 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m45s (x8 over 2m46s)  kubelet          Node addons-421494 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m45s (x8 over 2m46s)  kubelet          Node addons-421494 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m38s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m38s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m38s                  kubelet          Node addons-421494 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m38s                  kubelet          Node addons-421494 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m38s                  kubelet          Node addons-421494 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m34s                  node-controller  Node addons-421494 event: Registered Node addons-421494 in Controller
	  Normal   NodeReady                111s                   kubelet          Node addons-421494 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct13 20:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015096] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.497062] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032757] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.728511] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.553238] kauditd_printk_skb: 36 callbacks suppressed
	[Oct13 20:59] overlayfs: idmapped layers are currently not supported
	[  +0.065201] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [b69697c681afb5d9720692f76f4c7fb1f08cbb53f5d5c7219d2ecab1e81e51ad] <==
	{"level":"warn","ts":"2025-10-13T20:59:40.556959Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T20:59:40.568902Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T20:59:40.592976Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T20:59:40.627952Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T20:59:40.655707Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T20:59:40.665805Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T20:59:40.703547Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T20:59:40.732884Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T20:59:40.761379Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T20:59:40.788540Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T20:59:40.849797Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T20:59:40.851065Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T20:59:40.882582Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T20:59:40.909305Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T20:59:40.925271Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T20:59:40.988248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T20:59:41.006734Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T20:59:41.047721Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T20:59:41.144749Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T20:59:56.689074Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T20:59:56.696568Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:00:18.945584Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:00:18.970200Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:00:19.009679Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:00:19.026022Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58834","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [6c7397833e4002498f6710737345e773d2b044cae6bf0947da0148b393468546] <==
	2025/10/13 21:01:24 GCP Auth Webhook started!
	2025/10/13 21:01:50 Ready to marshal response ...
	2025/10/13 21:01:50 Ready to write response ...
	2025/10/13 21:01:50 Ready to marshal response ...
	2025/10/13 21:01:50 Ready to write response ...
	2025/10/13 21:01:50 Ready to marshal response ...
	2025/10/13 21:01:50 Ready to write response ...
	2025/10/13 21:02:11 Ready to marshal response ...
	2025/10/13 21:02:11 Ready to write response ...
	2025/10/13 21:02:12 Ready to marshal response ...
	2025/10/13 21:02:12 Ready to write response ...
	2025/10/13 21:02:12 Ready to marshal response ...
	2025/10/13 21:02:12 Ready to write response ...
	2025/10/13 21:02:20 Ready to marshal response ...
	2025/10/13 21:02:20 Ready to write response ...
	
	
	==> kernel <==
	 21:02:22 up 44 min,  0 user,  load average: 1.74, 1.18, 0.50
	Linux addons-421494 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [056c2dbfb314d220189391293f077c99025846b4b9f34abe273b798f61317570] <==
	E1013 21:00:22.522261       1 controller.go:417] "reading nfqueue stats" err="open /proc/net/netfilter/nfnetlink_queue: no such file or directory"
	I1013 21:00:30.925537       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 21:00:30.925572       1 main.go:301] handling current node
	I1013 21:00:40.919972       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 21:00:40.920101       1 main.go:301] handling current node
	I1013 21:00:50.920986       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 21:00:50.921012       1 main.go:301] handling current node
	I1013 21:01:00.923534       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 21:01:00.923579       1 main.go:301] handling current node
	I1013 21:01:10.920934       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 21:01:10.920962       1 main.go:301] handling current node
	I1013 21:01:20.920957       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 21:01:20.921029       1 main.go:301] handling current node
	I1013 21:01:30.919938       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 21:01:30.919971       1 main.go:301] handling current node
	I1013 21:01:40.920942       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 21:01:40.921006       1 main.go:301] handling current node
	I1013 21:01:50.920278       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 21:01:50.920311       1 main.go:301] handling current node
	I1013 21:02:00.919933       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 21:02:00.919973       1 main.go:301] handling current node
	I1013 21:02:10.922365       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 21:02:10.922400       1 main.go:301] handling current node
	I1013 21:02:20.919912       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 21:02:20.919943       1 main.go:301] handling current node
	
	
	==> kube-apiserver [65658d48b6c6a4b767ffad937ef4f74467a3d49eb81ae57813596987defa754b] <==
	I1013 20:59:56.260676       1 controller.go:667] quota admission added evaluator for: statefulsets.apps
	I1013 20:59:56.379006       1 alloc.go:328] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs={"IPv4":"10.108.80.67"}
	W1013 20:59:56.677934       1 logging.go:55] [core] [Channel #259 SubChannel #260]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1013 20:59:56.692215       1 logging.go:55] [core] [Channel #263 SubChannel #264]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1013 20:59:59.458034       1 alloc.go:328] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.98.136.212"}
	W1013 21:00:18.945473       1 logging.go:55] [core] [Channel #267 SubChannel #268]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1013 21:00:18.959956       1 logging.go:55] [core] [Channel #271 SubChannel #272]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1013 21:00:19.009429       1 logging.go:55] [core] [Channel #275 SubChannel #276]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1013 21:00:19.025848       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1013 21:00:31.349123       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.136.212:443: connect: connection refused
	E1013 21:00:31.349416       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.136.212:443: connect: connection refused" logger="UnhandledError"
	W1013 21:00:31.350066       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.136.212:443: connect: connection refused
	E1013 21:00:31.350085       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.136.212:443: connect: connection refused" logger="UnhandledError"
	W1013 21:00:31.415564       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.136.212:443: connect: connection refused
	E1013 21:00:31.415660       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.136.212:443: connect: connection refused" logger="UnhandledError"
	E1013 21:00:38.942806       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.140.108:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.140.108:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.140.108:443: connect: connection refused" logger="UnhandledError"
	W1013 21:00:38.944478       1 handler_proxy.go:99] no RequestInfo found in the context
	E1013 21:00:38.944596       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1013 21:00:38.980416       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1013 21:00:39.064159       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E1013 21:02:00.006478       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:43514: use of closed network connection
	E1013 21:02:00.662812       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:43556: use of closed network connection
	
	
	==> kube-controller-manager [3cef779926c40efd23a69e4a3f37a0bddcdaf08e16cbb526f1d13502db7a95a1] <==
	I1013 20:59:48.962854       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1013 20:59:48.962893       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1013 20:59:48.962922       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1013 20:59:48.962961       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1013 20:59:48.963029       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1013 20:59:48.964228       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1013 20:59:48.964309       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1013 20:59:48.964321       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1013 20:59:48.964330       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1013 20:59:48.964987       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1013 20:59:48.971563       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1013 20:59:48.974693       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1013 20:59:48.978020       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 20:59:48.978044       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1013 20:59:48.978053       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1013 20:59:48.984194       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1013 20:59:48.989926       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1013 21:00:18.938337       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1013 21:00:18.938486       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1013 21:00:18.938542       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1013 21:00:18.997072       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1013 21:00:19.001285       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1013 21:00:19.039337       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1013 21:00:19.102494       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 21:00:33.972340       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [99ba07ab68f8f8928330f6da7154c1fa9e2a9c5025906f287d474fc44f71bcd3] <==
	I1013 20:59:50.724267       1 server_linux.go:53] "Using iptables proxy"
	I1013 20:59:50.801439       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 20:59:50.901763       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 20:59:50.901792       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1013 20:59:50.901864       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 20:59:50.952117       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1013 20:59:50.952172       1 server_linux.go:132] "Using iptables Proxier"
	I1013 20:59:50.962407       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 20:59:50.974937       1 server.go:527] "Version info" version="v1.34.1"
	I1013 20:59:50.974973       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 20:59:50.976510       1 config.go:200] "Starting service config controller"
	I1013 20:59:50.976520       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 20:59:50.976537       1 config.go:106] "Starting endpoint slice config controller"
	I1013 20:59:50.976541       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 20:59:50.976560       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 20:59:50.976565       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 20:59:50.977196       1 config.go:309] "Starting node config controller"
	I1013 20:59:50.977203       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 20:59:50.977209       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 20:59:51.077634       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1013 20:59:51.077654       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1013 20:59:51.077666       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [7eaba707b03a138f0291c2f0905cf2c10e0c0e7a7d56a206cb7266035a7280bb] <==
	I1013 20:59:43.044295       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 20:59:43.046684       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1013 20:59:43.047105       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 20:59:43.047264       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 20:59:43.047130       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1013 20:59:43.060526       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1013 20:59:43.060723       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1013 20:59:43.060802       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1013 20:59:43.060870       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1013 20:59:43.060976       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1013 20:59:43.061068       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1013 20:59:43.061211       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1013 20:59:43.061364       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1013 20:59:43.062058       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1013 20:59:43.062111       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1013 20:59:43.061996       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1013 20:59:43.063903       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1013 20:59:43.063939       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1013 20:59:43.064008       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1013 20:59:43.064168       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1013 20:59:43.064179       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1013 20:59:43.064225       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1013 20:59:43.061635       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1013 20:59:43.061729       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	I1013 20:59:44.648313       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 13 21:02:19 addons-421494 kubelet[1290]: I1013 21:02:19.642514    1290 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l6jk5\" (UniqueName: \"kubernetes.io/projected/3ac8a025-dfcf-419f-8e9d-0611c8f6cb9d-kube-api-access-l6jk5\") on node \"addons-421494\" DevicePath \"\""
	Oct 13 21:02:19 addons-421494 kubelet[1290]: I1013 21:02:19.642558    1290 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/3ac8a025-dfcf-419f-8e9d-0611c8f6cb9d-gcp-creds\") on node \"addons-421494\" DevicePath \"\""
	Oct 13 21:02:19 addons-421494 kubelet[1290]: I1013 21:02:19.642571    1290 reconciler_common.go:299] "Volume detached for volume \"pvc-c6a933ea-f6fd-4814-a9cc-d489c3561930\" (UniqueName: \"kubernetes.io/host-path/3ac8a025-dfcf-419f-8e9d-0611c8f6cb9d-pvc-c6a933ea-f6fd-4814-a9cc-d489c3561930\") on node \"addons-421494\" DevicePath \"\""
	Oct 13 21:02:20 addons-421494 kubelet[1290]: I1013 21:02:20.487278    1290 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6d496ad3cf5b5767eb2554fe8f8b07e5d12a9d352f033b1119c88f418c6e14d"
	Oct 13 21:02:20 addons-421494 kubelet[1290]: E1013 21:02:20.489517    1290 status_manager.go:1018] "Failed to get status for pod" err="pods \"test-local-path\" is forbidden: User \"system:node:addons-421494\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-421494' and this object" podUID="3ac8a025-dfcf-419f-8e9d-0611c8f6cb9d" pod="default/test-local-path"
	Oct 13 21:02:20 addons-421494 kubelet[1290]: E1013 21:02:20.642489    1290 status_manager.go:1018] "Failed to get status for pod" err="pods \"test-local-path\" is forbidden: User \"system:node:addons-421494\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-421494' and this object" podUID="3ac8a025-dfcf-419f-8e9d-0611c8f6cb9d" pod="default/test-local-path"
	Oct 13 21:02:20 addons-421494 kubelet[1290]: I1013 21:02:20.651138    1290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mt6l\" (UniqueName: \"kubernetes.io/projected/a1e98164-a59c-4045-a2a0-befbd21558f2-kube-api-access-5mt6l\") pod \"helper-pod-delete-pvc-c6a933ea-f6fd-4814-a9cc-d489c3561930\" (UID: \"a1e98164-a59c-4045-a2a0-befbd21558f2\") " pod="local-path-storage/helper-pod-delete-pvc-c6a933ea-f6fd-4814-a9cc-d489c3561930"
	Oct 13 21:02:20 addons-421494 kubelet[1290]: I1013 21:02:20.651434    1290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/a1e98164-a59c-4045-a2a0-befbd21558f2-gcp-creds\") pod \"helper-pod-delete-pvc-c6a933ea-f6fd-4814-a9cc-d489c3561930\" (UID: \"a1e98164-a59c-4045-a2a0-befbd21558f2\") " pod="local-path-storage/helper-pod-delete-pvc-c6a933ea-f6fd-4814-a9cc-d489c3561930"
	Oct 13 21:02:20 addons-421494 kubelet[1290]: I1013 21:02:20.651603    1290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/a1e98164-a59c-4045-a2a0-befbd21558f2-data\") pod \"helper-pod-delete-pvc-c6a933ea-f6fd-4814-a9cc-d489c3561930\" (UID: \"a1e98164-a59c-4045-a2a0-befbd21558f2\") " pod="local-path-storage/helper-pod-delete-pvc-c6a933ea-f6fd-4814-a9cc-d489c3561930"
	Oct 13 21:02:20 addons-421494 kubelet[1290]: I1013 21:02:20.651925    1290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/a1e98164-a59c-4045-a2a0-befbd21558f2-script\") pod \"helper-pod-delete-pvc-c6a933ea-f6fd-4814-a9cc-d489c3561930\" (UID: \"a1e98164-a59c-4045-a2a0-befbd21558f2\") " pod="local-path-storage/helper-pod-delete-pvc-c6a933ea-f6fd-4814-a9cc-d489c3561930"
	Oct 13 21:02:20 addons-421494 kubelet[1290]: W1013 21:02:20.969638    1290 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/1c1825622e98f9a9fb3c72e6860c723048ac7a6e801dcc40c454272c1bcfd512/crio-81a531b91fcffbeeeb3184165a9f85d7cef40d4dd30d036ec253eea921cc2f9e WatchSource:0}: Error finding container 81a531b91fcffbeeeb3184165a9f85d7cef40d4dd30d036ec253eea921cc2f9e: Status 404 returned error can't find the container with id 81a531b91fcffbeeeb3184165a9f85d7cef40d4dd30d036ec253eea921cc2f9e
	Oct 13 21:02:21 addons-421494 kubelet[1290]: E1013 21:02:21.496881    1290 status_manager.go:1018] "Failed to get status for pod" err="pods \"test-local-path\" is forbidden: User \"system:node:addons-421494\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-421494' and this object" podUID="3ac8a025-dfcf-419f-8e9d-0611c8f6cb9d" pod="default/test-local-path"
	Oct 13 21:02:22 addons-421494 kubelet[1290]: I1013 21:02:22.424832    1290 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ac8a025-dfcf-419f-8e9d-0611c8f6cb9d" path="/var/lib/kubelet/pods/3ac8a025-dfcf-419f-8e9d-0611c8f6cb9d/volumes"
	Oct 13 21:02:22 addons-421494 kubelet[1290]: I1013 21:02:22.578851    1290 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/a1e98164-a59c-4045-a2a0-befbd21558f2-script\") pod \"a1e98164-a59c-4045-a2a0-befbd21558f2\" (UID: \"a1e98164-a59c-4045-a2a0-befbd21558f2\") "
	Oct 13 21:02:22 addons-421494 kubelet[1290]: I1013 21:02:22.578906    1290 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/a1e98164-a59c-4045-a2a0-befbd21558f2-gcp-creds\") pod \"a1e98164-a59c-4045-a2a0-befbd21558f2\" (UID: \"a1e98164-a59c-4045-a2a0-befbd21558f2\") "
	Oct 13 21:02:22 addons-421494 kubelet[1290]: I1013 21:02:22.578936    1290 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mt6l\" (UniqueName: \"kubernetes.io/projected/a1e98164-a59c-4045-a2a0-befbd21558f2-kube-api-access-5mt6l\") pod \"a1e98164-a59c-4045-a2a0-befbd21558f2\" (UID: \"a1e98164-a59c-4045-a2a0-befbd21558f2\") "
	Oct 13 21:02:22 addons-421494 kubelet[1290]: I1013 21:02:22.578961    1290 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/a1e98164-a59c-4045-a2a0-befbd21558f2-data\") pod \"a1e98164-a59c-4045-a2a0-befbd21558f2\" (UID: \"a1e98164-a59c-4045-a2a0-befbd21558f2\") "
	Oct 13 21:02:22 addons-421494 kubelet[1290]: I1013 21:02:22.579109    1290 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1e98164-a59c-4045-a2a0-befbd21558f2-data" (OuterVolumeSpecName: "data") pod "a1e98164-a59c-4045-a2a0-befbd21558f2" (UID: "a1e98164-a59c-4045-a2a0-befbd21558f2"). InnerVolumeSpecName "data". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Oct 13 21:02:22 addons-421494 kubelet[1290]: I1013 21:02:22.579456    1290 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1e98164-a59c-4045-a2a0-befbd21558f2-script" (OuterVolumeSpecName: "script") pod "a1e98164-a59c-4045-a2a0-befbd21558f2" (UID: "a1e98164-a59c-4045-a2a0-befbd21558f2"). InnerVolumeSpecName "script". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
	Oct 13 21:02:22 addons-421494 kubelet[1290]: I1013 21:02:22.579486    1290 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1e98164-a59c-4045-a2a0-befbd21558f2-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "a1e98164-a59c-4045-a2a0-befbd21558f2" (UID: "a1e98164-a59c-4045-a2a0-befbd21558f2"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Oct 13 21:02:22 addons-421494 kubelet[1290]: I1013 21:02:22.585862    1290 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1e98164-a59c-4045-a2a0-befbd21558f2-kube-api-access-5mt6l" (OuterVolumeSpecName: "kube-api-access-5mt6l") pod "a1e98164-a59c-4045-a2a0-befbd21558f2" (UID: "a1e98164-a59c-4045-a2a0-befbd21558f2"). InnerVolumeSpecName "kube-api-access-5mt6l". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 13 21:02:22 addons-421494 kubelet[1290]: I1013 21:02:22.680275    1290 reconciler_common.go:299] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/a1e98164-a59c-4045-a2a0-befbd21558f2-script\") on node \"addons-421494\" DevicePath \"\""
	Oct 13 21:02:22 addons-421494 kubelet[1290]: I1013 21:02:22.680307    1290 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/a1e98164-a59c-4045-a2a0-befbd21558f2-gcp-creds\") on node \"addons-421494\" DevicePath \"\""
	Oct 13 21:02:22 addons-421494 kubelet[1290]: I1013 21:02:22.680318    1290 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5mt6l\" (UniqueName: \"kubernetes.io/projected/a1e98164-a59c-4045-a2a0-befbd21558f2-kube-api-access-5mt6l\") on node \"addons-421494\" DevicePath \"\""
	Oct 13 21:02:22 addons-421494 kubelet[1290]: I1013 21:02:22.680328    1290 reconciler_common.go:299] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/a1e98164-a59c-4045-a2a0-befbd21558f2-data\") on node \"addons-421494\" DevicePath \"\""
	
	
	==> storage-provisioner [24771ab281e111109e2945e2f4112c4fb92daca6a6eb93304fbc65748bee14e7] <==
	W1013 21:01:57.303940       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:01:59.306661       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:01:59.310933       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:02:01.314203       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:02:01.322559       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:02:03.325707       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:02:03.330250       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:02:05.332885       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:02:05.339959       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:02:07.342618       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:02:07.347149       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:02:09.350559       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:02:09.358868       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:02:11.361811       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:02:11.368565       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:02:13.372842       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:02:13.376980       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:02:15.380651       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:02:15.384576       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:02:17.388076       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:02:17.395827       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:02:19.398978       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:02:19.405614       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:02:21.408721       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:02:21.413157       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-421494 -n addons-421494
helpers_test.go:269: (dbg) Run:  kubectl --context addons-421494 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-97mkt ingress-nginx-admission-patch-vjwq4 registry-creds-764b6fb674-f5gvj
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-421494 describe pod ingress-nginx-admission-create-97mkt ingress-nginx-admission-patch-vjwq4 registry-creds-764b6fb674-f5gvj
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-421494 describe pod ingress-nginx-admission-create-97mkt ingress-nginx-admission-patch-vjwq4 registry-creds-764b6fb674-f5gvj: exit status 1 (95.03796ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-97mkt" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-vjwq4" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-f5gvj" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-421494 describe pod ingress-nginx-admission-create-97mkt ingress-nginx-admission-patch-vjwq4 registry-creds-764b6fb674-f5gvj: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-421494 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-421494 addons disable headlamp --alsologtostderr -v=1: exit status 11 (249.789706ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1013 21:02:23.698012   12449 out.go:360] Setting OutFile to fd 1 ...
	I1013 21:02:23.698244   12449 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:02:23.698278   12449 out.go:374] Setting ErrFile to fd 2...
	I1013 21:02:23.698306   12449 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:02:23.699298   12449 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-2495/.minikube/bin
	I1013 21:02:23.699669   12449 mustload.go:65] Loading cluster: addons-421494
	I1013 21:02:23.700071   12449 config.go:182] Loaded profile config "addons-421494": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:02:23.700091   12449 addons.go:606] checking whether the cluster is paused
	I1013 21:02:23.700227   12449 config.go:182] Loaded profile config "addons-421494": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:02:23.700251   12449 host.go:66] Checking if "addons-421494" exists ...
	I1013 21:02:23.700747   12449 cli_runner.go:164] Run: docker container inspect addons-421494 --format={{.State.Status}}
	I1013 21:02:23.718618   12449 ssh_runner.go:195] Run: systemctl --version
	I1013 21:02:23.718676   12449 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-421494
	I1013 21:02:23.736510   12449 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/addons-421494/id_rsa Username:docker}
	I1013 21:02:23.842118   12449 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 21:02:23.842243   12449 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 21:02:23.869913   12449 cri.go:89] found id: "ebb997c7d79d583428b1355de2046886a326ae3c3e20f70bbe1ae0f9e6703f7f"
	I1013 21:02:23.869932   12449 cri.go:89] found id: "0c7386ac64481c921e58e89cc194fda8203de7ead964013cac7400057edd284b"
	I1013 21:02:23.869937   12449 cri.go:89] found id: "313e4250764b6b9ca250b085946faed4744d1dc6516ddd2c7da718d0652717f3"
	I1013 21:02:23.869940   12449 cri.go:89] found id: "4666db466f3f81ef2ee14c4d6b8e30164f55fc982ba337c36b3afda038eb1963"
	I1013 21:02:23.869944   12449 cri.go:89] found id: "26d51c5b89e1953bc71b605eafac087308c86278491ae0cac3f48f2a37104464"
	I1013 21:02:23.869948   12449 cri.go:89] found id: "181410ca5fe49d39df6815ad7e630167615f63fc2ac2ea29759d347e79cf62cb"
	I1013 21:02:23.869952   12449 cri.go:89] found id: "38812285e7c22cd5e7853a2d043e5969e7e9e46a305880956354a8717537af6b"
	I1013 21:02:23.869955   12449 cri.go:89] found id: "18ed9fad968275ac6372a8352ae2ebed473b24a41d01a533887860e8f4567b60"
	I1013 21:02:23.869958   12449 cri.go:89] found id: "9ba5c620ce249031919bf6e32638f8dc691e315b015d289385f70209d9e74ffd"
	I1013 21:02:23.869966   12449 cri.go:89] found id: "d970bcf470d76f206503134fe466e51e7976bbfa6b4a2e3fe3625f80149dfc31"
	I1013 21:02:23.869969   12449 cri.go:89] found id: "ba960f407a05af3ebfd3183ad26dc7b344cd8382fa28adf90d68c4f98db5420a"
	I1013 21:02:23.869972   12449 cri.go:89] found id: "d94a39038ca9352797188400141068b55b9094aa8d0f51d361b0e2d6590817cb"
	I1013 21:02:23.869975   12449 cri.go:89] found id: "5842ed1dd0727229088f521445604b2c1a71d16ca6035743c549feb0f0139a21"
	I1013 21:02:23.869979   12449 cri.go:89] found id: "fa6943addc3e3fa4467c9e16c42f411b5ae91ed87ff413a74875379b524422bf"
	I1013 21:02:23.869982   12449 cri.go:89] found id: "af2a904ce7f6b934880d46e7cf2b5afbeb8c28d04de3438f7f4ce62dc8173941"
	I1013 21:02:23.869989   12449 cri.go:89] found id: "24771ab281e111109e2945e2f4112c4fb92daca6a6eb93304fbc65748bee14e7"
	I1013 21:02:23.869996   12449 cri.go:89] found id: "99d43d6662679bda28129b2b95cba7f724d95042cc2bd6f3c957a1e2ba16b5d8"
	I1013 21:02:23.870000   12449 cri.go:89] found id: "056c2dbfb314d220189391293f077c99025846b4b9f34abe273b798f61317570"
	I1013 21:02:23.870004   12449 cri.go:89] found id: "99ba07ab68f8f8928330f6da7154c1fa9e2a9c5025906f287d474fc44f71bcd3"
	I1013 21:02:23.870007   12449 cri.go:89] found id: "b69697c681afb5d9720692f76f4c7fb1f08cbb53f5d5c7219d2ecab1e81e51ad"
	I1013 21:02:23.870012   12449 cri.go:89] found id: "3cef779926c40efd23a69e4a3f37a0bddcdaf08e16cbb526f1d13502db7a95a1"
	I1013 21:02:23.870018   12449 cri.go:89] found id: "65658d48b6c6a4b767ffad937ef4f74467a3d49eb81ae57813596987defa754b"
	I1013 21:02:23.870021   12449 cri.go:89] found id: "7eaba707b03a138f0291c2f0905cf2c10e0c0e7a7d56a206cb7266035a7280bb"
	I1013 21:02:23.870024   12449 cri.go:89] found id: ""
	I1013 21:02:23.870079   12449 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 21:02:23.886352   12449 out.go:203] 
	W1013 21:02:23.889208   12449 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T21:02:23Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T21:02:23Z" level=error msg="open /run/runc: no such file or directory"
	
	W1013 21:02:23.889237   12449 out.go:285] * 
	* 
	W1013 21:02:23.893919   12449 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1013 21:02:23.896756   12449 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-arm64 -p addons-421494 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (3.60s)
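Every `addons disable` failure in this run exits the same way: minikube's paused-cluster check lists the kube-system containers through crictl (which succeeds) and then runs `sudo runc list -f json` on the node, which fails with "open /run/runc: no such file or directory" on this CRI-O node. A minimal sketch of confirming that by hand, assuming the addons-421494 profile is still up; the grep patterns are only a guess at where CRI-O keeps its runtime state, which this log does not confirm:

	# Reproduce the failing paused-check and inspect the runtime state directories.
	out/minikube-linux-arm64 -p addons-421494 ssh "sudo runc list -f json"                      # expected to fail: open /run/runc: no such file or directory
	out/minikube-linux-arm64 -p addons-421494 ssh "sudo ls /run | grep -i -e runc -e crio"      # see which state dirs the runtime actually created
	# The kube-system containers are visible through the CRI, so the cluster is running, not paused:
	out/minikube-linux-arm64 -p addons-421494 ssh "sudo crictl ps --label io.kubernetes.pod.namespace=kube-system"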

                                                
                                    
TestAddons/parallel/CloudSpanner (5.35s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-zldmh" [16903902-5574-4e1e-a05c-f021c3b4d269] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004056132s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-421494 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-421494 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (339.973004ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1013 21:02:20.036277   11762 out.go:360] Setting OutFile to fd 1 ...
	I1013 21:02:20.036553   11762 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:02:20.037077   11762 out.go:374] Setting ErrFile to fd 2...
	I1013 21:02:20.037133   11762 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:02:20.038222   11762 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-2495/.minikube/bin
	I1013 21:02:20.038814   11762 mustload.go:65] Loading cluster: addons-421494
	I1013 21:02:20.039275   11762 config.go:182] Loaded profile config "addons-421494": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:02:20.039333   11762 addons.go:606] checking whether the cluster is paused
	I1013 21:02:20.039471   11762 config.go:182] Loaded profile config "addons-421494": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:02:20.039503   11762 host.go:66] Checking if "addons-421494" exists ...
	I1013 21:02:20.040036   11762 cli_runner.go:164] Run: docker container inspect addons-421494 --format={{.State.Status}}
	I1013 21:02:20.064947   11762 ssh_runner.go:195] Run: systemctl --version
	I1013 21:02:20.064999   11762 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-421494
	I1013 21:02:20.118264   11762 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/addons-421494/id_rsa Username:docker}
	I1013 21:02:20.239276   11762 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 21:02:20.239376   11762 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 21:02:20.270767   11762 cri.go:89] found id: "ebb997c7d79d583428b1355de2046886a326ae3c3e20f70bbe1ae0f9e6703f7f"
	I1013 21:02:20.270786   11762 cri.go:89] found id: "0c7386ac64481c921e58e89cc194fda8203de7ead964013cac7400057edd284b"
	I1013 21:02:20.270791   11762 cri.go:89] found id: "313e4250764b6b9ca250b085946faed4744d1dc6516ddd2c7da718d0652717f3"
	I1013 21:02:20.270795   11762 cri.go:89] found id: "4666db466f3f81ef2ee14c4d6b8e30164f55fc982ba337c36b3afda038eb1963"
	I1013 21:02:20.270798   11762 cri.go:89] found id: "26d51c5b89e1953bc71b605eafac087308c86278491ae0cac3f48f2a37104464"
	I1013 21:02:20.270802   11762 cri.go:89] found id: "181410ca5fe49d39df6815ad7e630167615f63fc2ac2ea29759d347e79cf62cb"
	I1013 21:02:20.270806   11762 cri.go:89] found id: "38812285e7c22cd5e7853a2d043e5969e7e9e46a305880956354a8717537af6b"
	I1013 21:02:20.270809   11762 cri.go:89] found id: "18ed9fad968275ac6372a8352ae2ebed473b24a41d01a533887860e8f4567b60"
	I1013 21:02:20.270813   11762 cri.go:89] found id: "9ba5c620ce249031919bf6e32638f8dc691e315b015d289385f70209d9e74ffd"
	I1013 21:02:20.270820   11762 cri.go:89] found id: "d970bcf470d76f206503134fe466e51e7976bbfa6b4a2e3fe3625f80149dfc31"
	I1013 21:02:20.270824   11762 cri.go:89] found id: "ba960f407a05af3ebfd3183ad26dc7b344cd8382fa28adf90d68c4f98db5420a"
	I1013 21:02:20.270827   11762 cri.go:89] found id: "d94a39038ca9352797188400141068b55b9094aa8d0f51d361b0e2d6590817cb"
	I1013 21:02:20.270830   11762 cri.go:89] found id: "5842ed1dd0727229088f521445604b2c1a71d16ca6035743c549feb0f0139a21"
	I1013 21:02:20.270833   11762 cri.go:89] found id: "fa6943addc3e3fa4467c9e16c42f411b5ae91ed87ff413a74875379b524422bf"
	I1013 21:02:20.270836   11762 cri.go:89] found id: "af2a904ce7f6b934880d46e7cf2b5afbeb8c28d04de3438f7f4ce62dc8173941"
	I1013 21:02:20.270844   11762 cri.go:89] found id: "24771ab281e111109e2945e2f4112c4fb92daca6a6eb93304fbc65748bee14e7"
	I1013 21:02:20.270847   11762 cri.go:89] found id: "99d43d6662679bda28129b2b95cba7f724d95042cc2bd6f3c957a1e2ba16b5d8"
	I1013 21:02:20.270852   11762 cri.go:89] found id: "056c2dbfb314d220189391293f077c99025846b4b9f34abe273b798f61317570"
	I1013 21:02:20.270855   11762 cri.go:89] found id: "99ba07ab68f8f8928330f6da7154c1fa9e2a9c5025906f287d474fc44f71bcd3"
	I1013 21:02:20.270859   11762 cri.go:89] found id: "b69697c681afb5d9720692f76f4c7fb1f08cbb53f5d5c7219d2ecab1e81e51ad"
	I1013 21:02:20.270863   11762 cri.go:89] found id: "3cef779926c40efd23a69e4a3f37a0bddcdaf08e16cbb526f1d13502db7a95a1"
	I1013 21:02:20.270867   11762 cri.go:89] found id: "65658d48b6c6a4b767ffad937ef4f74467a3d49eb81ae57813596987defa754b"
	I1013 21:02:20.270870   11762 cri.go:89] found id: "7eaba707b03a138f0291c2f0905cf2c10e0c0e7a7d56a206cb7266035a7280bb"
	I1013 21:02:20.270873   11762 cri.go:89] found id: ""
	I1013 21:02:20.270921   11762 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 21:02:20.287681   11762 out.go:203] 
	W1013 21:02:20.290854   11762 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T21:02:20Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T21:02:20Z" level=error msg="open /run/runc: no such file or directory"
	
	W1013 21:02:20.290953   11762 out.go:285] * 
	* 
	W1013 21:02:20.295938   11762 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_ssh_3550c37ada75a0e7a3e4824ad4683f6603bdaa9e_0.log                     │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_ssh_3550c37ada75a0e7a3e4824ad4683f6603bdaa9e_0.log                     │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1013 21:02:20.300938   11762 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-arm64 -p addons-421494 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.35s)

                                                
                                    
TestAddons/parallel/LocalPath (8.61s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-421494 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-421494 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-421494 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-421494 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-421494 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-421494 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-421494 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [3ac8a025-dfcf-419f-8e9d-0611c8f6cb9d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [3ac8a025-dfcf-419f-8e9d-0611c8f6cb9d] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [3ac8a025-dfcf-419f-8e9d-0611c8f6cb9d] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003706044s
addons_test.go:967: (dbg) Run:  kubectl --context addons-421494 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-421494 ssh "cat /opt/local-path-provisioner/pvc-c6a933ea-f6fd-4814-a9cc-d489c3561930_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-421494 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-421494 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-421494 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-421494 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (408.697715ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1013 21:02:20.714739   11927 out.go:360] Setting OutFile to fd 1 ...
	I1013 21:02:20.714955   11927 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:02:20.714967   11927 out.go:374] Setting ErrFile to fd 2...
	I1013 21:02:20.714973   11927 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:02:20.715261   11927 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-2495/.minikube/bin
	I1013 21:02:20.715611   11927 mustload.go:65] Loading cluster: addons-421494
	I1013 21:02:20.716019   11927 config.go:182] Loaded profile config "addons-421494": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:02:20.716036   11927 addons.go:606] checking whether the cluster is paused
	I1013 21:02:20.716147   11927 config.go:182] Loaded profile config "addons-421494": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:02:20.716161   11927 host.go:66] Checking if "addons-421494" exists ...
	I1013 21:02:20.716587   11927 cli_runner.go:164] Run: docker container inspect addons-421494 --format={{.State.Status}}
	I1013 21:02:20.743975   11927 ssh_runner.go:195] Run: systemctl --version
	I1013 21:02:20.744033   11927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-421494
	I1013 21:02:20.791881   11927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/addons-421494/id_rsa Username:docker}
	I1013 21:02:20.906357   11927 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 21:02:20.906441   11927 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 21:02:20.964841   11927 cri.go:89] found id: "ebb997c7d79d583428b1355de2046886a326ae3c3e20f70bbe1ae0f9e6703f7f"
	I1013 21:02:20.964860   11927 cri.go:89] found id: "0c7386ac64481c921e58e89cc194fda8203de7ead964013cac7400057edd284b"
	I1013 21:02:20.964864   11927 cri.go:89] found id: "313e4250764b6b9ca250b085946faed4744d1dc6516ddd2c7da718d0652717f3"
	I1013 21:02:20.964868   11927 cri.go:89] found id: "4666db466f3f81ef2ee14c4d6b8e30164f55fc982ba337c36b3afda038eb1963"
	I1013 21:02:20.964872   11927 cri.go:89] found id: "26d51c5b89e1953bc71b605eafac087308c86278491ae0cac3f48f2a37104464"
	I1013 21:02:20.964880   11927 cri.go:89] found id: "181410ca5fe49d39df6815ad7e630167615f63fc2ac2ea29759d347e79cf62cb"
	I1013 21:02:20.964883   11927 cri.go:89] found id: "38812285e7c22cd5e7853a2d043e5969e7e9e46a305880956354a8717537af6b"
	I1013 21:02:20.964887   11927 cri.go:89] found id: "18ed9fad968275ac6372a8352ae2ebed473b24a41d01a533887860e8f4567b60"
	I1013 21:02:20.964890   11927 cri.go:89] found id: "9ba5c620ce249031919bf6e32638f8dc691e315b015d289385f70209d9e74ffd"
	I1013 21:02:20.964896   11927 cri.go:89] found id: "d970bcf470d76f206503134fe466e51e7976bbfa6b4a2e3fe3625f80149dfc31"
	I1013 21:02:20.964899   11927 cri.go:89] found id: "ba960f407a05af3ebfd3183ad26dc7b344cd8382fa28adf90d68c4f98db5420a"
	I1013 21:02:20.964902   11927 cri.go:89] found id: "d94a39038ca9352797188400141068b55b9094aa8d0f51d361b0e2d6590817cb"
	I1013 21:02:20.964905   11927 cri.go:89] found id: "5842ed1dd0727229088f521445604b2c1a71d16ca6035743c549feb0f0139a21"
	I1013 21:02:20.964910   11927 cri.go:89] found id: "fa6943addc3e3fa4467c9e16c42f411b5ae91ed87ff413a74875379b524422bf"
	I1013 21:02:20.964913   11927 cri.go:89] found id: "af2a904ce7f6b934880d46e7cf2b5afbeb8c28d04de3438f7f4ce62dc8173941"
	I1013 21:02:20.964918   11927 cri.go:89] found id: "24771ab281e111109e2945e2f4112c4fb92daca6a6eb93304fbc65748bee14e7"
	I1013 21:02:20.964921   11927 cri.go:89] found id: "99d43d6662679bda28129b2b95cba7f724d95042cc2bd6f3c957a1e2ba16b5d8"
	I1013 21:02:20.964924   11927 cri.go:89] found id: "056c2dbfb314d220189391293f077c99025846b4b9f34abe273b798f61317570"
	I1013 21:02:20.964927   11927 cri.go:89] found id: "99ba07ab68f8f8928330f6da7154c1fa9e2a9c5025906f287d474fc44f71bcd3"
	I1013 21:02:20.964930   11927 cri.go:89] found id: "b69697c681afb5d9720692f76f4c7fb1f08cbb53f5d5c7219d2ecab1e81e51ad"
	I1013 21:02:20.964935   11927 cri.go:89] found id: "3cef779926c40efd23a69e4a3f37a0bddcdaf08e16cbb526f1d13502db7a95a1"
	I1013 21:02:20.964938   11927 cri.go:89] found id: "65658d48b6c6a4b767ffad937ef4f74467a3d49eb81ae57813596987defa754b"
	I1013 21:02:20.964940   11927 cri.go:89] found id: "7eaba707b03a138f0291c2f0905cf2c10e0c0e7a7d56a206cb7266035a7280bb"
	I1013 21:02:20.964943   11927 cri.go:89] found id: ""
	I1013 21:02:20.965001   11927 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 21:02:21.028770   11927 out.go:203] 
	W1013 21:02:21.032825   11927 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T21:02:21Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T21:02:21Z" level=error msg="open /run/runc: no such file or directory"
	
	W1013 21:02:21.032851   11927 out.go:285] * 
	* 
	W1013 21:02:21.038367   11927 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1013 21:02:21.041471   11927 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-arm64 -p addons-421494 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (8.61s)
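Note that the LocalPath data path itself passed: the PVC bound, the test pod wrote file1, and the file was read back over SSH at addons_test.go:976; only the trailing storage-provisioner-rancher disable step hit the same MK_ADDON_DISABLE_PAUSED error as the other addon tests. For reference, a hedged sketch of re-checking the provisioned data on the node, reusing the exact hostPath from this run (the PVC UID is specific to this run, and the directory is removed once the helper-pod-delete-pvc-* pod finishes):

	out/minikube-linux-arm64 -p addons-421494 ssh "ls /opt/local-path-provisioner/"
	out/minikube-linux-arm64 -p addons-421494 ssh "cat /opt/local-path-provisioner/pvc-c6a933ea-f6fd-4814-a9cc-d489c3561930_default_test-pvc/file1"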

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.27s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-lswkm" [09e2dc90-684c-40b7-ad9c-333959dc27fa] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003624027s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-421494 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-421494 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (263.365111ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1013 21:02:12.231030   11411 out.go:360] Setting OutFile to fd 1 ...
	I1013 21:02:12.231226   11411 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:02:12.231238   11411 out.go:374] Setting ErrFile to fd 2...
	I1013 21:02:12.231244   11411 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:02:12.231634   11411 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-2495/.minikube/bin
	I1013 21:02:12.232027   11411 mustload.go:65] Loading cluster: addons-421494
	I1013 21:02:12.233160   11411 config.go:182] Loaded profile config "addons-421494": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:02:12.233217   11411 addons.go:606] checking whether the cluster is paused
	I1013 21:02:12.233375   11411 config.go:182] Loaded profile config "addons-421494": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:02:12.233417   11411 host.go:66] Checking if "addons-421494" exists ...
	I1013 21:02:12.233914   11411 cli_runner.go:164] Run: docker container inspect addons-421494 --format={{.State.Status}}
	I1013 21:02:12.251643   11411 ssh_runner.go:195] Run: systemctl --version
	I1013 21:02:12.251704   11411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-421494
	I1013 21:02:12.272781   11411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/addons-421494/id_rsa Username:docker}
	I1013 21:02:12.378135   11411 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 21:02:12.378216   11411 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 21:02:12.410567   11411 cri.go:89] found id: "ebb997c7d79d583428b1355de2046886a326ae3c3e20f70bbe1ae0f9e6703f7f"
	I1013 21:02:12.410590   11411 cri.go:89] found id: "0c7386ac64481c921e58e89cc194fda8203de7ead964013cac7400057edd284b"
	I1013 21:02:12.410595   11411 cri.go:89] found id: "313e4250764b6b9ca250b085946faed4744d1dc6516ddd2c7da718d0652717f3"
	I1013 21:02:12.410601   11411 cri.go:89] found id: "4666db466f3f81ef2ee14c4d6b8e30164f55fc982ba337c36b3afda038eb1963"
	I1013 21:02:12.410604   11411 cri.go:89] found id: "26d51c5b89e1953bc71b605eafac087308c86278491ae0cac3f48f2a37104464"
	I1013 21:02:12.410608   11411 cri.go:89] found id: "181410ca5fe49d39df6815ad7e630167615f63fc2ac2ea29759d347e79cf62cb"
	I1013 21:02:12.410611   11411 cri.go:89] found id: "38812285e7c22cd5e7853a2d043e5969e7e9e46a305880956354a8717537af6b"
	I1013 21:02:12.410614   11411 cri.go:89] found id: "18ed9fad968275ac6372a8352ae2ebed473b24a41d01a533887860e8f4567b60"
	I1013 21:02:12.410617   11411 cri.go:89] found id: "9ba5c620ce249031919bf6e32638f8dc691e315b015d289385f70209d9e74ffd"
	I1013 21:02:12.410623   11411 cri.go:89] found id: "d970bcf470d76f206503134fe466e51e7976bbfa6b4a2e3fe3625f80149dfc31"
	I1013 21:02:12.410626   11411 cri.go:89] found id: "ba960f407a05af3ebfd3183ad26dc7b344cd8382fa28adf90d68c4f98db5420a"
	I1013 21:02:12.410629   11411 cri.go:89] found id: "d94a39038ca9352797188400141068b55b9094aa8d0f51d361b0e2d6590817cb"
	I1013 21:02:12.410633   11411 cri.go:89] found id: "5842ed1dd0727229088f521445604b2c1a71d16ca6035743c549feb0f0139a21"
	I1013 21:02:12.410636   11411 cri.go:89] found id: "fa6943addc3e3fa4467c9e16c42f411b5ae91ed87ff413a74875379b524422bf"
	I1013 21:02:12.410639   11411 cri.go:89] found id: "af2a904ce7f6b934880d46e7cf2b5afbeb8c28d04de3438f7f4ce62dc8173941"
	I1013 21:02:12.410644   11411 cri.go:89] found id: "24771ab281e111109e2945e2f4112c4fb92daca6a6eb93304fbc65748bee14e7"
	I1013 21:02:12.410655   11411 cri.go:89] found id: "99d43d6662679bda28129b2b95cba7f724d95042cc2bd6f3c957a1e2ba16b5d8"
	I1013 21:02:12.410660   11411 cri.go:89] found id: "056c2dbfb314d220189391293f077c99025846b4b9f34abe273b798f61317570"
	I1013 21:02:12.410663   11411 cri.go:89] found id: "99ba07ab68f8f8928330f6da7154c1fa9e2a9c5025906f287d474fc44f71bcd3"
	I1013 21:02:12.410666   11411 cri.go:89] found id: "b69697c681afb5d9720692f76f4c7fb1f08cbb53f5d5c7219d2ecab1e81e51ad"
	I1013 21:02:12.410672   11411 cri.go:89] found id: "3cef779926c40efd23a69e4a3f37a0bddcdaf08e16cbb526f1d13502db7a95a1"
	I1013 21:02:12.410675   11411 cri.go:89] found id: "65658d48b6c6a4b767ffad937ef4f74467a3d49eb81ae57813596987defa754b"
	I1013 21:02:12.410678   11411 cri.go:89] found id: "7eaba707b03a138f0291c2f0905cf2c10e0c0e7a7d56a206cb7266035a7280bb"
	I1013 21:02:12.410681   11411 cri.go:89] found id: ""
	I1013 21:02:12.410730   11411 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 21:02:12.428859   11411 out.go:203] 
	W1013 21:02:12.431978   11411 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T21:02:12Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T21:02:12Z" level=error msg="open /run/runc: no such file or directory"
	
	W1013 21:02:12.432003   11411 out.go:285] * 
	* 
	W1013 21:02:12.436641   11411 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1013 21:02:12.439552   11411 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-arm64 -p addons-421494 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (5.27s)
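
This disable failure stops at the paused-state check visible in the log above: minikube lists the kube-system containers with crictl, then runs `sudo runc list -f json`, and that command exits 1 because /run/runc does not exist on the node. The commands below are a minimal sketch for inspecting this by hand; the profile name addons-421494 and the CRI-O drop-in config path are taken from logs elsewhere in this report, while the expectation that the node's low-level runtime writes its state under /run/runc is an assumption to verify.

	# Re-run the failing paused-state check directly on the node
	# (profile name taken from the logs above).
	minikube -p addons-421494 ssh "sudo runc list -f json"

	# Check whether the runc state directory the check expects is present.
	minikube -p addons-421494 ssh "ls -ld /run/runc"

	# See which low-level runtime and cgroup manager CRI-O is configured with;
	# if the configured runtime is not runc, or uses a different state root,
	# /run/runc may never be created. The config path is an assumption based
	# on the crio.conf.d edits shown later in this report.
	minikube -p addons-421494 ssh "sudo grep -R 'default_runtime\|cgroup_manager' /etc/crio/crio.conf.d/"
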

                                                
                                    
x
+
TestAddons/parallel/Yakd (6.26s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-fz2dg" [3041b27c-3070-4ba4-992a-0e8c2fff1d53] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.002761925s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-421494 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-421494 addons disable yakd --alsologtostderr -v=1: exit status 11 (256.810889ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1013 21:02:06.968637   11320 out.go:360] Setting OutFile to fd 1 ...
	I1013 21:02:06.968864   11320 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:02:06.968879   11320 out.go:374] Setting ErrFile to fd 2...
	I1013 21:02:06.968885   11320 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:02:06.969172   11320 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-2495/.minikube/bin
	I1013 21:02:06.969497   11320 mustload.go:65] Loading cluster: addons-421494
	I1013 21:02:06.969881   11320 config.go:182] Loaded profile config "addons-421494": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:02:06.969905   11320 addons.go:606] checking whether the cluster is paused
	I1013 21:02:06.970040   11320 config.go:182] Loaded profile config "addons-421494": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:02:06.970070   11320 host.go:66] Checking if "addons-421494" exists ...
	I1013 21:02:06.970544   11320 cli_runner.go:164] Run: docker container inspect addons-421494 --format={{.State.Status}}
	I1013 21:02:06.989098   11320 ssh_runner.go:195] Run: systemctl --version
	I1013 21:02:06.989153   11320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-421494
	I1013 21:02:07.008717   11320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/addons-421494/id_rsa Username:docker}
	I1013 21:02:07.110120   11320 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 21:02:07.110206   11320 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 21:02:07.143341   11320 cri.go:89] found id: "ebb997c7d79d583428b1355de2046886a326ae3c3e20f70bbe1ae0f9e6703f7f"
	I1013 21:02:07.143369   11320 cri.go:89] found id: "0c7386ac64481c921e58e89cc194fda8203de7ead964013cac7400057edd284b"
	I1013 21:02:07.143373   11320 cri.go:89] found id: "313e4250764b6b9ca250b085946faed4744d1dc6516ddd2c7da718d0652717f3"
	I1013 21:02:07.143377   11320 cri.go:89] found id: "4666db466f3f81ef2ee14c4d6b8e30164f55fc982ba337c36b3afda038eb1963"
	I1013 21:02:07.143381   11320 cri.go:89] found id: "26d51c5b89e1953bc71b605eafac087308c86278491ae0cac3f48f2a37104464"
	I1013 21:02:07.143384   11320 cri.go:89] found id: "181410ca5fe49d39df6815ad7e630167615f63fc2ac2ea29759d347e79cf62cb"
	I1013 21:02:07.143387   11320 cri.go:89] found id: "38812285e7c22cd5e7853a2d043e5969e7e9e46a305880956354a8717537af6b"
	I1013 21:02:07.143390   11320 cri.go:89] found id: "18ed9fad968275ac6372a8352ae2ebed473b24a41d01a533887860e8f4567b60"
	I1013 21:02:07.143393   11320 cri.go:89] found id: "9ba5c620ce249031919bf6e32638f8dc691e315b015d289385f70209d9e74ffd"
	I1013 21:02:07.143403   11320 cri.go:89] found id: "d970bcf470d76f206503134fe466e51e7976bbfa6b4a2e3fe3625f80149dfc31"
	I1013 21:02:07.143407   11320 cri.go:89] found id: "ba960f407a05af3ebfd3183ad26dc7b344cd8382fa28adf90d68c4f98db5420a"
	I1013 21:02:07.143410   11320 cri.go:89] found id: "d94a39038ca9352797188400141068b55b9094aa8d0f51d361b0e2d6590817cb"
	I1013 21:02:07.143413   11320 cri.go:89] found id: "5842ed1dd0727229088f521445604b2c1a71d16ca6035743c549feb0f0139a21"
	I1013 21:02:07.143417   11320 cri.go:89] found id: "fa6943addc3e3fa4467c9e16c42f411b5ae91ed87ff413a74875379b524422bf"
	I1013 21:02:07.143419   11320 cri.go:89] found id: "af2a904ce7f6b934880d46e7cf2b5afbeb8c28d04de3438f7f4ce62dc8173941"
	I1013 21:02:07.143427   11320 cri.go:89] found id: "24771ab281e111109e2945e2f4112c4fb92daca6a6eb93304fbc65748bee14e7"
	I1013 21:02:07.143431   11320 cri.go:89] found id: "99d43d6662679bda28129b2b95cba7f724d95042cc2bd6f3c957a1e2ba16b5d8"
	I1013 21:02:07.143435   11320 cri.go:89] found id: "056c2dbfb314d220189391293f077c99025846b4b9f34abe273b798f61317570"
	I1013 21:02:07.143438   11320 cri.go:89] found id: "99ba07ab68f8f8928330f6da7154c1fa9e2a9c5025906f287d474fc44f71bcd3"
	I1013 21:02:07.143441   11320 cri.go:89] found id: "b69697c681afb5d9720692f76f4c7fb1f08cbb53f5d5c7219d2ecab1e81e51ad"
	I1013 21:02:07.143447   11320 cri.go:89] found id: "3cef779926c40efd23a69e4a3f37a0bddcdaf08e16cbb526f1d13502db7a95a1"
	I1013 21:02:07.143450   11320 cri.go:89] found id: "65658d48b6c6a4b767ffad937ef4f74467a3d49eb81ae57813596987defa754b"
	I1013 21:02:07.143452   11320 cri.go:89] found id: "7eaba707b03a138f0291c2f0905cf2c10e0c0e7a7d56a206cb7266035a7280bb"
	I1013 21:02:07.143455   11320 cri.go:89] found id: ""
	I1013 21:02:07.143506   11320 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 21:02:07.159985   11320 out.go:203] 
	W1013 21:02:07.163447   11320 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T21:02:07Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T21:02:07Z" level=error msg="open /run/runc: no such file or directory"
	
	W1013 21:02:07.163471   11320 out.go:285] * 
	* 
	W1013 21:02:07.168324   11320 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1013 21:02:07.171165   11320 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-arm64 -p addons-421494 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (6.26s)
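
Note that the crictl half of the same check succeeds here (the kube-system container IDs are all listed above), so the failure is confined to the runc step. A hedged sketch for confirming from the crictl side that none of those containers is actually paused, using the same label filter minikube uses, is below; <container-id> is a placeholder for any ID from the list above.

	# List kube-system containers with the same label filter minikube uses.
	minikube -p addons-421494 ssh "sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system"

	# Inspect the state of one container; <container-id> stands in for any
	# of the IDs reported above.
	minikube -p addons-421494 ssh "sudo crictl inspect <container-id> | grep -i state"
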

                                                
                                    
x
+
TestForceSystemdFlag (517.69s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-257205 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1013 21:53:51.249267    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/functional-192425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p force-systemd-flag-257205 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: exit status 80 (8m33.659827174s)

                                                
                                                
-- stdout --
	* [force-systemd-flag-257205] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21724
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21724-2495/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-2495/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "force-systemd-flag-257205" primary control-plane node in "force-systemd-flag-257205" cluster
	* Pulling base image v0.0.48-1759745255-21703 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1013 21:53:37.109687  162695 out.go:360] Setting OutFile to fd 1 ...
	I1013 21:53:37.109798  162695 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:53:37.109808  162695 out.go:374] Setting ErrFile to fd 2...
	I1013 21:53:37.109813  162695 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:53:37.110083  162695 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-2495/.minikube/bin
	I1013 21:53:37.110535  162695 out.go:368] Setting JSON to false
	I1013 21:53:37.111487  162695 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":5752,"bootTime":1760386666,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1013 21:53:37.111559  162695 start.go:141] virtualization:  
	I1013 21:53:37.115874  162695 out.go:179] * [force-systemd-flag-257205] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1013 21:53:37.119589  162695 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 21:53:37.119633  162695 notify.go:220] Checking for updates...
	I1013 21:53:37.126215  162695 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 21:53:37.129330  162695 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-2495/kubeconfig
	I1013 21:53:37.132417  162695 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-2495/.minikube
	I1013 21:53:37.135370  162695 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1013 21:53:37.138414  162695 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 21:53:37.142214  162695 config.go:182] Loaded profile config "pause-609677": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:53:37.142321  162695 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 21:53:37.184941  162695 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1013 21:53:37.185056  162695 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 21:53:37.282449  162695 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-13 21:53:37.271156689 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 21:53:37.282551  162695 docker.go:318] overlay module found
	I1013 21:53:37.285912  162695 out.go:179] * Using the docker driver based on user configuration
	I1013 21:53:37.288907  162695 start.go:305] selected driver: docker
	I1013 21:53:37.288944  162695 start.go:925] validating driver "docker" against <nil>
	I1013 21:53:37.288968  162695 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 21:53:37.289715  162695 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 21:53:37.348461  162695 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-13 21:53:37.338093927 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 21:53:37.348621  162695 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1013 21:53:37.348851  162695 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1013 21:53:37.351891  162695 out.go:179] * Using Docker driver with root privileges
	I1013 21:53:37.354806  162695 cni.go:84] Creating CNI manager for ""
	I1013 21:53:37.354868  162695 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 21:53:37.354880  162695 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1013 21:53:37.354965  162695 start.go:349] cluster config:
	{Name:force-systemd-flag-257205 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-257205 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluste
r.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 21:53:37.358095  162695 out.go:179] * Starting "force-systemd-flag-257205" primary control-plane node in "force-systemd-flag-257205" cluster
	I1013 21:53:37.360996  162695 cache.go:123] Beginning downloading kic base image for docker with crio
	I1013 21:53:37.363992  162695 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1013 21:53:37.366941  162695 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 21:53:37.366995  162695 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-2495/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1013 21:53:37.367004  162695 cache.go:58] Caching tarball of preloaded images
	I1013 21:53:37.367087  162695 preload.go:233] Found /home/jenkins/minikube-integration/21724-2495/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1013 21:53:37.367096  162695 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1013 21:53:37.367204  162695 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-flag-257205/config.json ...
	I1013 21:53:37.367221  162695 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-flag-257205/config.json: {Name:mk787b8d665b9a5e5766aef0b33001a85a387305 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:53:37.367456  162695 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1013 21:53:37.389450  162695 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1013 21:53:37.389473  162695 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1013 21:53:37.389502  162695 cache.go:232] Successfully downloaded all kic artifacts
	I1013 21:53:37.389524  162695 start.go:360] acquireMachinesLock for force-systemd-flag-257205: {Name:mkd418c10ee9a694aac1948d5b68060345c4e881 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 21:53:37.389640  162695 start.go:364] duration metric: took 95.661µs to acquireMachinesLock for "force-systemd-flag-257205"
	I1013 21:53:37.389674  162695 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-257205 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-257205 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 21:53:37.389739  162695 start.go:125] createHost starting for "" (driver="docker")
	I1013 21:53:37.394861  162695 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1013 21:53:37.395088  162695 start.go:159] libmachine.API.Create for "force-systemd-flag-257205" (driver="docker")
	I1013 21:53:37.395137  162695 client.go:168] LocalClient.Create starting
	I1013 21:53:37.395206  162695 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem
	I1013 21:53:37.395249  162695 main.go:141] libmachine: Decoding PEM data...
	I1013 21:53:37.395266  162695 main.go:141] libmachine: Parsing certificate...
	I1013 21:53:37.395323  162695 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem
	I1013 21:53:37.395344  162695 main.go:141] libmachine: Decoding PEM data...
	I1013 21:53:37.395367  162695 main.go:141] libmachine: Parsing certificate...
	I1013 21:53:37.395727  162695 cli_runner.go:164] Run: docker network inspect force-systemd-flag-257205 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1013 21:53:37.410414  162695 cli_runner.go:211] docker network inspect force-systemd-flag-257205 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1013 21:53:37.410486  162695 network_create.go:284] running [docker network inspect force-systemd-flag-257205] to gather additional debugging logs...
	I1013 21:53:37.410506  162695 cli_runner.go:164] Run: docker network inspect force-systemd-flag-257205
	W1013 21:53:37.428470  162695 cli_runner.go:211] docker network inspect force-systemd-flag-257205 returned with exit code 1
	I1013 21:53:37.428502  162695 network_create.go:287] error running [docker network inspect force-systemd-flag-257205]: docker network inspect force-systemd-flag-257205: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-257205 not found
	I1013 21:53:37.428516  162695 network_create.go:289] output of [docker network inspect force-systemd-flag-257205]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-257205 not found
	
	** /stderr **
	I1013 21:53:37.428680  162695 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 21:53:37.444687  162695 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-95647f6063f5 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:2e:3d:b3:ce:26:60} reservation:<nil>}
	I1013 21:53:37.444985  162695 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-524c3512c6b6 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:06:88:a1:02:e0:8e} reservation:<nil>}
	I1013 21:53:37.445274  162695 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-2d17b8b5c002 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:aa:ca:29:7e:1f:a0} reservation:<nil>}
	I1013 21:53:37.445672  162695 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019b5de0}
	I1013 21:53:37.445692  162695 network_create.go:124] attempt to create docker network force-systemd-flag-257205 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1013 21:53:37.445748  162695 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-257205 force-systemd-flag-257205
	I1013 21:53:37.504261  162695 network_create.go:108] docker network force-systemd-flag-257205 192.168.76.0/24 created
	I1013 21:53:37.504293  162695 kic.go:121] calculated static IP "192.168.76.2" for the "force-systemd-flag-257205" container
	I1013 21:53:37.504368  162695 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1013 21:53:37.524597  162695 cli_runner.go:164] Run: docker volume create force-systemd-flag-257205 --label name.minikube.sigs.k8s.io=force-systemd-flag-257205 --label created_by.minikube.sigs.k8s.io=true
	I1013 21:53:37.543356  162695 oci.go:103] Successfully created a docker volume force-systemd-flag-257205
	I1013 21:53:37.543446  162695 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-257205-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-257205 --entrypoint /usr/bin/test -v force-systemd-flag-257205:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1013 21:53:38.122700  162695 oci.go:107] Successfully prepared a docker volume force-systemd-flag-257205
	I1013 21:53:38.122749  162695 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 21:53:38.122767  162695 kic.go:194] Starting extracting preloaded images to volume ...
	I1013 21:53:38.122829  162695 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-2495/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-257205:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1013 21:53:42.916164  162695 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-2495/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-257205:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.793301748s)
	I1013 21:53:42.916195  162695 kic.go:203] duration metric: took 4.793424034s to extract preloaded images to volume ...
	W1013 21:53:42.916340  162695 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1013 21:53:42.916444  162695 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1013 21:53:42.974695  162695 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-257205 --name force-systemd-flag-257205 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-257205 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-257205 --network force-systemd-flag-257205 --ip 192.168.76.2 --volume force-systemd-flag-257205:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1013 21:53:43.277533  162695 cli_runner.go:164] Run: docker container inspect force-systemd-flag-257205 --format={{.State.Running}}
	I1013 21:53:43.298838  162695 cli_runner.go:164] Run: docker container inspect force-systemd-flag-257205 --format={{.State.Status}}
	I1013 21:53:43.322213  162695 cli_runner.go:164] Run: docker exec force-systemd-flag-257205 stat /var/lib/dpkg/alternatives/iptables
	I1013 21:53:43.388547  162695 oci.go:144] the created container "force-systemd-flag-257205" has a running status.
	I1013 21:53:43.388583  162695 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21724-2495/.minikube/machines/force-systemd-flag-257205/id_rsa...
	I1013 21:53:44.104903  162695 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21724-2495/.minikube/machines/force-systemd-flag-257205/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1013 21:53:44.104991  162695 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21724-2495/.minikube/machines/force-systemd-flag-257205/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1013 21:53:44.125310  162695 cli_runner.go:164] Run: docker container inspect force-systemd-flag-257205 --format={{.State.Status}}
	I1013 21:53:44.141221  162695 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1013 21:53:44.141256  162695 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-257205 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1013 21:53:44.181919  162695 cli_runner.go:164] Run: docker container inspect force-systemd-flag-257205 --format={{.State.Status}}
	I1013 21:53:44.199196  162695 machine.go:93] provisionDockerMachine start ...
	I1013 21:53:44.199293  162695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-257205
	I1013 21:53:44.221624  162695 main.go:141] libmachine: Using SSH client type: native
	I1013 21:53:44.222002  162695 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33031 <nil> <nil>}
	I1013 21:53:44.222020  162695 main.go:141] libmachine: About to run SSH command:
	hostname
	I1013 21:53:44.222624  162695 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1013 21:53:47.371474  162695 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-257205
	
	I1013 21:53:47.371495  162695 ubuntu.go:182] provisioning hostname "force-systemd-flag-257205"
	I1013 21:53:47.371557  162695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-257205
	I1013 21:53:47.390147  162695 main.go:141] libmachine: Using SSH client type: native
	I1013 21:53:47.390468  162695 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33031 <nil> <nil>}
	I1013 21:53:47.390485  162695 main.go:141] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-257205 && echo "force-systemd-flag-257205" | sudo tee /etc/hostname
	I1013 21:53:47.546143  162695 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-257205
	
	I1013 21:53:47.546218  162695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-257205
	I1013 21:53:47.563846  162695 main.go:141] libmachine: Using SSH client type: native
	I1013 21:53:47.564155  162695 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33031 <nil> <nil>}
	I1013 21:53:47.564173  162695 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-257205' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-257205/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-257205' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 21:53:47.716047  162695 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1013 21:53:47.716073  162695 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-2495/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-2495/.minikube}
	I1013 21:53:47.716093  162695 ubuntu.go:190] setting up certificates
	I1013 21:53:47.716101  162695 provision.go:84] configureAuth start
	I1013 21:53:47.716175  162695 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-257205
	I1013 21:53:47.733171  162695 provision.go:143] copyHostCerts
	I1013 21:53:47.733219  162695 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21724-2495/.minikube/ca.pem
	I1013 21:53:47.733253  162695 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-2495/.minikube/ca.pem, removing ...
	I1013 21:53:47.733265  162695 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-2495/.minikube/ca.pem
	I1013 21:53:47.733350  162695 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-2495/.minikube/ca.pem (1082 bytes)
	I1013 21:53:47.733432  162695 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21724-2495/.minikube/cert.pem
	I1013 21:53:47.733449  162695 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-2495/.minikube/cert.pem, removing ...
	I1013 21:53:47.733453  162695 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-2495/.minikube/cert.pem
	I1013 21:53:47.733479  162695 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-2495/.minikube/cert.pem (1123 bytes)
	I1013 21:53:47.733519  162695 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21724-2495/.minikube/key.pem
	I1013 21:53:47.733541  162695 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-2495/.minikube/key.pem, removing ...
	I1013 21:53:47.733547  162695 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-2495/.minikube/key.pem
	I1013 21:53:47.733580  162695 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-2495/.minikube/key.pem (1675 bytes)
	I1013 21:53:47.733631  162695 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-2495/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-257205 san=[127.0.0.1 192.168.76.2 force-systemd-flag-257205 localhost minikube]
	I1013 21:53:47.952226  162695 provision.go:177] copyRemoteCerts
	I1013 21:53:47.952299  162695 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 21:53:47.952344  162695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-257205
	I1013 21:53:47.968509  162695 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33031 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/force-systemd-flag-257205/id_rsa Username:docker}
	I1013 21:53:48.080014  162695 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1013 21:53:48.080097  162695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1013 21:53:48.101170  162695 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21724-2495/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1013 21:53:48.101229  162695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1013 21:53:48.119869  162695 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21724-2495/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1013 21:53:48.119931  162695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1013 21:53:48.145286  162695 provision.go:87] duration metric: took 429.170566ms to configureAuth
	I1013 21:53:48.145362  162695 ubuntu.go:206] setting minikube options for container-runtime
	I1013 21:53:48.145624  162695 config.go:182] Loaded profile config "force-systemd-flag-257205": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:53:48.145761  162695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-257205
	I1013 21:53:48.163950  162695 main.go:141] libmachine: Using SSH client type: native
	I1013 21:53:48.164254  162695 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33031 <nil> <nil>}
	I1013 21:53:48.164274  162695 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1013 21:53:48.424239  162695 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1013 21:53:48.424263  162695 machine.go:96] duration metric: took 4.225042541s to provisionDockerMachine
	I1013 21:53:48.424282  162695 client.go:171] duration metric: took 11.029133795s to LocalClient.Create
	I1013 21:53:48.424297  162695 start.go:167] duration metric: took 11.029210076s to libmachine.API.Create "force-systemd-flag-257205"
	I1013 21:53:48.424310  162695 start.go:293] postStartSetup for "force-systemd-flag-257205" (driver="docker")
	I1013 21:53:48.424320  162695 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 21:53:48.424387  162695 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 21:53:48.424438  162695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-257205
	I1013 21:53:48.442039  162695 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33031 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/force-systemd-flag-257205/id_rsa Username:docker}
	I1013 21:53:48.544226  162695 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 21:53:48.547709  162695 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1013 21:53:48.547809  162695 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1013 21:53:48.547839  162695 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-2495/.minikube/addons for local assets ...
	I1013 21:53:48.547925  162695 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-2495/.minikube/files for local assets ...
	I1013 21:53:48.548078  162695 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem -> 42992.pem in /etc/ssl/certs
	I1013 21:53:48.548096  162695 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem -> /etc/ssl/certs/42992.pem
	I1013 21:53:48.548195  162695 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1013 21:53:48.556112  162695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem --> /etc/ssl/certs/42992.pem (1708 bytes)
	I1013 21:53:48.574232  162695 start.go:296] duration metric: took 149.908566ms for postStartSetup
	I1013 21:53:48.574611  162695 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-257205
	I1013 21:53:48.590900  162695 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-flag-257205/config.json ...
	I1013 21:53:48.591168  162695 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 21:53:48.591209  162695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-257205
	I1013 21:53:48.608842  162695 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33031 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/force-systemd-flag-257205/id_rsa Username:docker}
	I1013 21:53:48.704331  162695 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1013 21:53:48.708695  162695 start.go:128] duration metric: took 11.318941383s to createHost
	I1013 21:53:48.708720  162695 start.go:83] releasing machines lock for "force-systemd-flag-257205", held for 11.319067885s
	I1013 21:53:48.708788  162695 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-257205
	I1013 21:53:48.726055  162695 ssh_runner.go:195] Run: cat /version.json
	I1013 21:53:48.726117  162695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-257205
	I1013 21:53:48.726381  162695 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 21:53:48.726440  162695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-257205
	I1013 21:53:48.750968  162695 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33031 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/force-systemd-flag-257205/id_rsa Username:docker}
	I1013 21:53:48.753819  162695 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33031 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/force-systemd-flag-257205/id_rsa Username:docker}
	I1013 21:53:48.936421  162695 ssh_runner.go:195] Run: systemctl --version
	I1013 21:53:48.942817  162695 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1013 21:53:48.980459  162695 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 21:53:48.984869  162695 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 21:53:48.984985  162695 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 21:53:49.016097  162695 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1013 21:53:49.016166  162695 start.go:495] detecting cgroup driver to use...
	I1013 21:53:49.016982  162695 start.go:499] using "systemd" cgroup driver as enforced via flags
	I1013 21:53:49.017098  162695 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1013 21:53:49.037060  162695 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1013 21:53:49.050278  162695 docker.go:218] disabling cri-docker service (if available) ...
	I1013 21:53:49.050339  162695 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 21:53:49.069764  162695 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 21:53:49.090265  162695 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 21:53:49.213059  162695 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 21:53:49.329682  162695 docker.go:234] disabling docker service ...
	I1013 21:53:49.329754  162695 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 21:53:49.354006  162695 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 21:53:49.367770  162695 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 21:53:49.489314  162695 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 21:53:49.601583  162695 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1013 21:53:49.614907  162695 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 21:53:49.628567  162695 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1013 21:53:49.628675  162695 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 21:53:49.637216  162695 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1013 21:53:49.637315  162695 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 21:53:49.647357  162695 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 21:53:49.657556  162695 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 21:53:49.666610  162695 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 21:53:49.674596  162695 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 21:53:49.683485  162695 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 21:53:49.698677  162695 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 21:53:49.708157  162695 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 21:53:49.715585  162695 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
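
The sed and tee edits above can be checked before CRI-O is restarted below; a minimal sketch (not performed in this run), assuming only the /etc/crio/crio.conf.d/02-crio.conf drop-in and /etc/crictl.yaml written in the preceding steps:

    # confirm the pause image, systemd cgroup manager and sysctl override landed in the drop-in
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
    # confirm crictl is pointed at the CRI-O socket
    cat /etc/crictl.yaml
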
	I1013 21:53:49.722988  162695 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 21:53:49.832275  162695 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1013 21:53:49.949692  162695 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1013 21:53:49.949816  162695 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1013 21:53:49.954131  162695 start.go:563] Will wait 60s for crictl version
	I1013 21:53:49.954194  162695 ssh_runner.go:195] Run: which crictl
	I1013 21:53:49.959953  162695 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1013 21:53:49.991002  162695 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1013 21:53:49.991163  162695 ssh_runner.go:195] Run: crio --version
	I1013 21:53:50.029936  162695 ssh_runner.go:195] Run: crio --version
	I1013 21:53:50.067445  162695 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1013 21:53:50.070720  162695 cli_runner.go:164] Run: docker network inspect force-systemd-flag-257205 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 21:53:50.088421  162695 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1013 21:53:50.093305  162695 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 21:53:50.105059  162695 kubeadm.go:883] updating cluster {Name:force-systemd-flag-257205 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-257205 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 21:53:50.105251  162695 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 21:53:50.105314  162695 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 21:53:50.144703  162695 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 21:53:50.144726  162695 crio.go:433] Images already preloaded, skipping extraction
	I1013 21:53:50.144834  162695 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 21:53:50.171290  162695 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 21:53:50.171322  162695 cache_images.go:85] Images are preloaded, skipping loading
	I1013 21:53:50.171331  162695 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1013 21:53:50.171437  162695 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=force-systemd-flag-257205 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-257205 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1013 21:53:50.171526  162695 ssh_runner.go:195] Run: crio config
	I1013 21:53:50.229875  162695 cni.go:84] Creating CNI manager for ""
	I1013 21:53:50.229901  162695 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 21:53:50.229914  162695 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1013 21:53:50.229937  162695 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-257205 NodeName:force-systemd-flag-257205 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 21:53:50.230091  162695 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-flag-257205"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1013 21:53:50.230180  162695 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1013 21:53:50.238180  162695 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 21:53:50.238253  162695 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 21:53:50.245960  162695 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1013 21:53:50.260102  162695 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 21:53:50.272996  162695 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
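
The kubeadm configuration shown above has just been copied to /var/tmp/minikube/kubeadm.yaml.new; as a rough sketch only, it could be sanity-checked on the node like this, assuming the kubeadm binary path that minikube invokes later in this log:

    # dry-run the generated config without modifying the node
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
    # confirm the kubelet section requests the same systemd cgroup driver CRI-O was configured with
    grep cgroupDriver /var/tmp/minikube/kubeadm.yaml.new
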
	I1013 21:53:50.286304  162695 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1013 21:53:50.289788  162695 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 21:53:50.299619  162695 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 21:53:50.411855  162695 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 21:53:50.428533  162695 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-flag-257205 for IP: 192.168.76.2
	I1013 21:53:50.428551  162695 certs.go:195] generating shared ca certs ...
	I1013 21:53:50.428567  162695 certs.go:227] acquiring lock for ca certs: {Name:mk2386d3847709c1fe7ff4ab092e2e3fd8551167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:53:50.428712  162695 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-2495/.minikube/ca.key
	I1013 21:53:50.428767  162695 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.key
	I1013 21:53:50.428776  162695 certs.go:257] generating profile certs ...
	I1013 21:53:50.428834  162695 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-flag-257205/client.key
	I1013 21:53:50.428846  162695 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-flag-257205/client.crt with IP's: []
	I1013 21:53:51.127499  162695 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-flag-257205/client.crt ...
	I1013 21:53:51.127532  162695 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-flag-257205/client.crt: {Name:mk5bdf28e187d71640b946b5040431266b9aa3e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:53:51.127746  162695 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-flag-257205/client.key ...
	I1013 21:53:51.127764  162695 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-flag-257205/client.key: {Name:mkd5ed10e68ddd244c99ad7349b2f3c7fb006aa6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:53:51.127904  162695 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-flag-257205/apiserver.key.a5080875
	I1013 21:53:51.127927  162695 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-flag-257205/apiserver.crt.a5080875 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1013 21:53:52.510740  162695 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-flag-257205/apiserver.crt.a5080875 ...
	I1013 21:53:52.510814  162695 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-flag-257205/apiserver.crt.a5080875: {Name:mked3bed3f8d3667e26b8de39a14151fc05b8dfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:53:52.511022  162695 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-flag-257205/apiserver.key.a5080875 ...
	I1013 21:53:52.511059  162695 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-flag-257205/apiserver.key.a5080875: {Name:mkf20aa5172111032156d3dfe0be07ad769ea46c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:53:52.511176  162695 certs.go:382] copying /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-flag-257205/apiserver.crt.a5080875 -> /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-flag-257205/apiserver.crt
	I1013 21:53:52.511292  162695 certs.go:386] copying /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-flag-257205/apiserver.key.a5080875 -> /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-flag-257205/apiserver.key
	I1013 21:53:52.511403  162695 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-flag-257205/proxy-client.key
	I1013 21:53:52.511451  162695 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-flag-257205/proxy-client.crt with IP's: []
	I1013 21:53:53.029402  162695 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-flag-257205/proxy-client.crt ...
	I1013 21:53:53.029433  162695 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-flag-257205/proxy-client.crt: {Name:mk1b6efc582f763ec872dc5058279fd52a694a7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:53:53.029617  162695 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-flag-257205/proxy-client.key ...
	I1013 21:53:53.029632  162695 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-flag-257205/proxy-client.key: {Name:mkeb0cc439a720eccf9a14281d4f6d9241e8ace1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:53:53.029730  162695 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21724-2495/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1013 21:53:53.029752  162695 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21724-2495/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1013 21:53:53.029772  162695 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1013 21:53:53.029789  162695 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1013 21:53:53.029811  162695 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-flag-257205/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1013 21:53:53.029829  162695 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-flag-257205/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1013 21:53:53.029840  162695 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-flag-257205/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1013 21:53:53.029855  162695 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-flag-257205/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1013 21:53:53.029902  162695 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/4299.pem (1338 bytes)
	W1013 21:53:53.029941  162695 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-2495/.minikube/certs/4299_empty.pem, impossibly tiny 0 bytes
	I1013 21:53:53.029952  162695 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca-key.pem (1679 bytes)
	I1013 21:53:53.029987  162695 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem (1082 bytes)
	I1013 21:53:53.030015  162695 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem (1123 bytes)
	I1013 21:53:53.030040  162695 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/key.pem (1675 bytes)
	I1013 21:53:53.030088  162695 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem (1708 bytes)
	I1013 21:53:53.030118  162695 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem -> /usr/share/ca-certificates/42992.pem
	I1013 21:53:53.030134  162695 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21724-2495/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1013 21:53:53.030144  162695 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/4299.pem -> /usr/share/ca-certificates/4299.pem
	I1013 21:53:53.030762  162695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 21:53:53.049596  162695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1013 21:53:53.067017  162695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 21:53:53.085797  162695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1013 21:53:53.103016  162695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-flag-257205/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1013 21:53:53.120408  162695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-flag-257205/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1013 21:53:53.141843  162695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-flag-257205/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 21:53:53.161595  162695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-flag-257205/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1013 21:53:53.179720  162695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem --> /usr/share/ca-certificates/42992.pem (1708 bytes)
	I1013 21:53:53.196397  162695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 21:53:53.213155  162695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/certs/4299.pem --> /usr/share/ca-certificates/4299.pem (1338 bytes)
	I1013 21:53:53.230477  162695 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 21:53:53.242381  162695 ssh_runner.go:195] Run: openssl version
	I1013 21:53:53.248753  162695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/42992.pem && ln -fs /usr/share/ca-certificates/42992.pem /etc/ssl/certs/42992.pem"
	I1013 21:53:53.256854  162695 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42992.pem
	I1013 21:53:53.261739  162695 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 13 21:06 /usr/share/ca-certificates/42992.pem
	I1013 21:53:53.261836  162695 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42992.pem
	I1013 21:53:53.304140  162695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/42992.pem /etc/ssl/certs/3ec20f2e.0"
	I1013 21:53:53.312499  162695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 21:53:53.320472  162695 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 21:53:53.324295  162695 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 20:59 /usr/share/ca-certificates/minikubeCA.pem
	I1013 21:53:53.324402  162695 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 21:53:53.366400  162695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1013 21:53:53.374600  162695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4299.pem && ln -fs /usr/share/ca-certificates/4299.pem /etc/ssl/certs/4299.pem"
	I1013 21:53:53.382505  162695 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4299.pem
	I1013 21:53:53.386101  162695 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 13 21:06 /usr/share/ca-certificates/4299.pem
	I1013 21:53:53.386159  162695 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4299.pem
	I1013 21:53:53.428073  162695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4299.pem /etc/ssl/certs/51391683.0"
	I1013 21:53:53.437024  162695 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 21:53:53.441831  162695 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1013 21:53:53.441928  162695 kubeadm.go:400] StartCluster: {Name:force-systemd-flag-257205 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-257205 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 21:53:53.442025  162695 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 21:53:53.442146  162695 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 21:53:53.472265  162695 cri.go:89] found id: ""
	I1013 21:53:53.472409  162695 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1013 21:53:53.483347  162695 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1013 21:53:53.491380  162695 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1013 21:53:53.491507  162695 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1013 21:53:53.501809  162695 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1013 21:53:53.501877  162695 kubeadm.go:157] found existing configuration files:
	
	I1013 21:53:53.501951  162695 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1013 21:53:53.510607  162695 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1013 21:53:53.510716  162695 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1013 21:53:53.522958  162695 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1013 21:53:53.533337  162695 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1013 21:53:53.533449  162695 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1013 21:53:53.542647  162695 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1013 21:53:53.550265  162695 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1013 21:53:53.550358  162695 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1013 21:53:53.557712  162695 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1013 21:53:53.565643  162695 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1013 21:53:53.565718  162695 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1013 21:53:53.573813  162695 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1013 21:53:53.653610  162695 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1013 21:53:53.653981  162695 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1013 21:53:53.729558  162695 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1013 21:58:04.820103  162695 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.76.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1013 21:58:04.820206  162695 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1013 21:58:04.824105  162695 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1013 21:58:04.824178  162695 kubeadm.go:318] [preflight] Running pre-flight checks
	I1013 21:58:04.824281  162695 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1013 21:58:04.824356  162695 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1013 21:58:04.824398  162695 kubeadm.go:318] OS: Linux
	I1013 21:58:04.824450  162695 kubeadm.go:318] CGROUPS_CPU: enabled
	I1013 21:58:04.824503  162695 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1013 21:58:04.824556  162695 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1013 21:58:04.824610  162695 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1013 21:58:04.824663  162695 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1013 21:58:04.824719  162695 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1013 21:58:04.824769  162695 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1013 21:58:04.824842  162695 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1013 21:58:04.824911  162695 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1013 21:58:04.824989  162695 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1013 21:58:04.825102  162695 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1013 21:58:04.825204  162695 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1013 21:58:04.825277  162695 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1013 21:58:04.829922  162695 out.go:252]   - Generating certificates and keys ...
	I1013 21:58:04.830035  162695 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1013 21:58:04.830112  162695 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1013 21:58:04.830187  162695 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1013 21:58:04.830249  162695 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1013 21:58:04.830316  162695 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1013 21:58:04.830375  162695 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1013 21:58:04.830439  162695 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1013 21:58:04.830595  162695 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-257205 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1013 21:58:04.830681  162695 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1013 21:58:04.830833  162695 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-257205 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1013 21:58:04.830920  162695 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1013 21:58:04.831001  162695 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1013 21:58:04.831061  162695 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1013 21:58:04.831138  162695 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1013 21:58:04.831209  162695 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1013 21:58:04.831277  162695 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1013 21:58:04.831340  162695 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1013 21:58:04.831414  162695 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1013 21:58:04.831475  162695 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1013 21:58:04.831572  162695 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1013 21:58:04.831659  162695 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1013 21:58:04.834406  162695 out.go:252]   - Booting up control plane ...
	I1013 21:58:04.834516  162695 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1013 21:58:04.834605  162695 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1013 21:58:04.834694  162695 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1013 21:58:04.834848  162695 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1013 21:58:04.834982  162695 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1013 21:58:04.835127  162695 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1013 21:58:04.835247  162695 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1013 21:58:04.835304  162695 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1013 21:58:04.835459  162695 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1013 21:58:04.835594  162695 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1013 21:58:04.835666  162695 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001253415s
	I1013 21:58:04.835766  162695 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1013 21:58:04.835880  162695 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1013 21:58:04.835990  162695 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1013 21:58:04.836104  162695 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1013 21:58:04.836190  162695 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000115146s
	I1013 21:58:04.836291  162695 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000010837s
	I1013 21:58:04.836397  162695 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.00044952s
	I1013 21:58:04.836414  162695 kubeadm.go:318] 
	I1013 21:58:04.836520  162695 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1013 21:58:04.836649  162695 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1013 21:58:04.836775  162695 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1013 21:58:04.836878  162695 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1013 21:58:04.836963  162695 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1013 21:58:04.837049  162695 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1013 21:58:04.837056  162695 kubeadm.go:318] 
	W1013 21:58:04.837181  162695 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-257205 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-257205 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001253415s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000115146s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000010837s
	[control-plane-check] kube-scheduler is not healthy after 4m0.00044952s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.76.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-257205 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-257205 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001253415s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000115146s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000010837s
	[control-plane-check] kube-scheduler is not healthy after 4m0.00044952s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.76.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
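
The failure text above points at inspecting the control-plane containers directly; a minimal triage sketch, using only the socket, endpoints and ports already named in the log (CONTAINERID is a placeholder, not a value from this run):

    # list control-plane containers, as the kubeadm error message suggests
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    # read the logs of a failing container (replace CONTAINERID with an ID from the listing)
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
    # probe the health endpoints that the control-plane check was polling
    curl -k https://192.168.76.2:8443/livez
    curl -k https://127.0.0.1:10257/healthz
    curl -k https://127.0.0.1:10259/livez
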
	
	I1013 21:58:04.837270  162695 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1013 21:58:05.369438  162695 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 21:58:05.382567  162695 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1013 21:58:05.382630  162695 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1013 21:58:05.390228  162695 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1013 21:58:05.390250  162695 kubeadm.go:157] found existing configuration files:
	
	I1013 21:58:05.390299  162695 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1013 21:58:05.397544  162695 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1013 21:58:05.397604  162695 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1013 21:58:05.404918  162695 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1013 21:58:05.412589  162695 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1013 21:58:05.412654  162695 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1013 21:58:05.419705  162695 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1013 21:58:05.426893  162695 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1013 21:58:05.427002  162695 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1013 21:58:05.434468  162695 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1013 21:58:05.442062  162695 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1013 21:58:05.442124  162695 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1013 21:58:05.449405  162695 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1013 21:58:05.490649  162695 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1013 21:58:05.490710  162695 kubeadm.go:318] [preflight] Running pre-flight checks
	I1013 21:58:05.513943  162695 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1013 21:58:05.514019  162695 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1013 21:58:05.514062  162695 kubeadm.go:318] OS: Linux
	I1013 21:58:05.514114  162695 kubeadm.go:318] CGROUPS_CPU: enabled
	I1013 21:58:05.514169  162695 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1013 21:58:05.514236  162695 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1013 21:58:05.514292  162695 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1013 21:58:05.514347  162695 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1013 21:58:05.514404  162695 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1013 21:58:05.514455  162695 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1013 21:58:05.514510  162695 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1013 21:58:05.514562  162695 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1013 21:58:05.585933  162695 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1013 21:58:05.586050  162695 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1013 21:58:05.586148  162695 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1013 21:58:05.600197  162695 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1013 21:58:05.607116  162695 out.go:252]   - Generating certificates and keys ...
	I1013 21:58:05.607240  162695 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1013 21:58:05.607322  162695 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1013 21:58:05.607415  162695 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1013 21:58:05.607501  162695 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1013 21:58:05.607591  162695 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1013 21:58:05.607656  162695 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1013 21:58:05.607732  162695 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1013 21:58:05.607836  162695 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1013 21:58:05.607926  162695 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1013 21:58:05.608012  162695 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1013 21:58:05.608058  162695 kubeadm.go:318] [certs] Using the existing "sa" key
	I1013 21:58:05.608127  162695 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1013 21:58:06.121469  162695 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1013 21:58:06.821330  162695 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1013 21:58:07.699068  162695 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1013 21:58:08.245141  162695 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1013 21:58:09.060070  162695 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1013 21:58:09.060627  162695 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1013 21:58:09.063126  162695 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1013 21:58:09.066527  162695 out.go:252]   - Booting up control plane ...
	I1013 21:58:09.066628  162695 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1013 21:58:09.066718  162695 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1013 21:58:09.066793  162695 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1013 21:58:09.081225  162695 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1013 21:58:09.081568  162695 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1013 21:58:09.089218  162695 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1013 21:58:09.089606  162695 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1013 21:58:09.089841  162695 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1013 21:58:09.230473  162695 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1013 21:58:09.230600  162695 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1013 21:58:10.231865  162695 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001420521s
	I1013 21:58:10.235771  162695 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1013 21:58:10.235897  162695 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1013 21:58:10.235995  162695 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1013 21:58:10.236082  162695 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1013 22:02:10.236756  162695 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000958405s
	I1013 22:02:10.237955  162695 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000820647s
	I1013 22:02:10.238057  162695 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000623434s
	I1013 22:02:10.238071  162695 kubeadm.go:318] 
	I1013 22:02:10.238167  162695 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1013 22:02:10.238256  162695 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1013 22:02:10.238350  162695 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1013 22:02:10.238452  162695 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1013 22:02:10.238531  162695 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1013 22:02:10.238644  162695 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1013 22:02:10.238654  162695 kubeadm.go:318] 
	I1013 22:02:10.243041  162695 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1013 22:02:10.243279  162695 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1013 22:02:10.243394  162695 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1013 22:02:10.244013  162695 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1013 22:02:10.244089  162695 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1013 22:02:10.244142  162695 kubeadm.go:402] duration metric: took 8m16.80221698s to StartCluster
	I1013 22:02:10.244178  162695 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1013 22:02:10.244238  162695 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1013 22:02:10.268351  162695 cri.go:89] found id: ""
	I1013 22:02:10.268383  162695 logs.go:282] 0 containers: []
	W1013 22:02:10.268391  162695 logs.go:284] No container was found matching "kube-apiserver"
	I1013 22:02:10.268399  162695 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1013 22:02:10.268452  162695 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1013 22:02:10.291966  162695 cri.go:89] found id: ""
	I1013 22:02:10.291989  162695 logs.go:282] 0 containers: []
	W1013 22:02:10.291997  162695 logs.go:284] No container was found matching "etcd"
	I1013 22:02:10.292004  162695 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1013 22:02:10.292062  162695 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1013 22:02:10.316217  162695 cri.go:89] found id: ""
	I1013 22:02:10.316241  162695 logs.go:282] 0 containers: []
	W1013 22:02:10.316258  162695 logs.go:284] No container was found matching "coredns"
	I1013 22:02:10.316264  162695 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1013 22:02:10.316337  162695 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1013 22:02:10.340887  162695 cri.go:89] found id: ""
	I1013 22:02:10.340910  162695 logs.go:282] 0 containers: []
	W1013 22:02:10.340919  162695 logs.go:284] No container was found matching "kube-scheduler"
	I1013 22:02:10.340925  162695 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1013 22:02:10.340979  162695 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1013 22:02:10.364292  162695 cri.go:89] found id: ""
	I1013 22:02:10.364314  162695 logs.go:282] 0 containers: []
	W1013 22:02:10.364322  162695 logs.go:284] No container was found matching "kube-proxy"
	I1013 22:02:10.364328  162695 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1013 22:02:10.364386  162695 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1013 22:02:10.389001  162695 cri.go:89] found id: ""
	I1013 22:02:10.389073  162695 logs.go:282] 0 containers: []
	W1013 22:02:10.389085  162695 logs.go:284] No container was found matching "kube-controller-manager"
	I1013 22:02:10.389101  162695 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1013 22:02:10.389163  162695 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1013 22:02:10.420029  162695 cri.go:89] found id: ""
	I1013 22:02:10.420054  162695 logs.go:282] 0 containers: []
	W1013 22:02:10.420062  162695 logs.go:284] No container was found matching "kindnet"
	I1013 22:02:10.420072  162695 logs.go:123] Gathering logs for describe nodes ...
	I1013 22:02:10.420083  162695 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1013 22:02:10.486215  162695 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1013 22:02:10.477965    2379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1013 22:02:10.479122    2379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1013 22:02:10.480202    2379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1013 22:02:10.480857    2379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1013 22:02:10.482280    2379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1013 22:02:10.477965    2379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1013 22:02:10.479122    2379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1013 22:02:10.480202    2379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1013 22:02:10.480857    2379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1013 22:02:10.482280    2379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1013 22:02:10.486247  162695 logs.go:123] Gathering logs for CRI-O ...
	I1013 22:02:10.486274  162695 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1013 22:02:10.560538  162695 logs.go:123] Gathering logs for container status ...
	I1013 22:02:10.560573  162695 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1013 22:02:10.587995  162695 logs.go:123] Gathering logs for kubelet ...
	I1013 22:02:10.588022  162695 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1013 22:02:10.675720  162695 logs.go:123] Gathering logs for dmesg ...
	I1013 22:02:10.675758  162695 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1013 22:02:10.690101  162695 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001420521s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000958405s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000820647s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000623434s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1013 22:02:10.690156  162695 out.go:285] * 
	* 
	W1013 22:02:10.690336  162695 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001420521s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000958405s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000820647s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000623434s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1013 22:02:10.690364  162695 out.go:285] * 
	* 
	W1013 22:02:10.692775  162695 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1013 22:02:10.700237  162695 out.go:203] 
	W1013 22:02:10.703128  162695 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001420521s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000958405s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000820647s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000623434s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1013 22:02:10.703164  162695 out.go:285] * 
	* 
	I1013 22:02:10.706146  162695 out.go:203] 

                                                
                                                
** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-linux-arm64 start -p force-systemd-flag-257205 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio" : exit status 80
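Note: the kubeadm wait-control-plane phase above gave up after four minutes because none of the three health endpoints it polls ever answered. A minimal, purely illustrative Go sketch of the same probes (meant to be run from inside the node, e.g. via `minikube ssh`; the endpoint URLs and the connection-refused symptom come from the log above, everything else is hypothetical):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// The three endpoints kubeadm's control-plane-check polled in the log above.
	endpoints := []string{
		"https://192.168.76.2:8443/livez", // kube-apiserver
		"https://127.0.0.1:10257/healthz", // kube-controller-manager
		"https://127.0.0.1:10259/livez",   // kube-scheduler
	}
	// Control-plane components serve self-signed certificates, so skip
	// verification for this diagnostic probe only.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for _, url := range endpoints {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Printf("%s: %v\n", url, err) // e.g. "connection refused", as seen in the failure
			continue
		}
		fmt.Printf("%s: %s\n", url, resp.Status)
		resp.Body.Close()
	}
}
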
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-257205 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2025-10-13 22:02:11.065370646 +0000 UTC m=+3855.900199824
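Note: the follow-up command at docker_test.go:132 reads the CRI-O drop-in that --force-systemd is meant to produce. A minimal sketch of that check scripted outside the test harness; the binary path and profile name are taken from the run above, while the expected `cgroup_manager = "systemd"` line is an assumption about what the drop-in should contain:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same command the post-mortem runs: cat the CRI-O drop-in inside the node.
	out, err := exec.Command("out/minikube-linux-arm64",
		"-p", "force-systemd-flag-257205",
		"ssh", "cat /etc/crio/crio.conf.d/02-crio.conf").CombinedOutput()
	if err != nil {
		fmt.Printf("ssh failed: %v\n%s", err, out)
		return
	}
	// Assumed expectation: --force-systemd switches CRI-O to the systemd cgroup manager.
	if strings.Contains(string(out), `cgroup_manager = "systemd"`) {
		fmt.Println("CRI-O is configured for the systemd cgroup manager")
	} else {
		fmt.Println("systemd cgroup manager not found in 02-crio.conf")
	}
}
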
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestForceSystemdFlag]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestForceSystemdFlag]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect force-systemd-flag-257205
helpers_test.go:243: (dbg) docker inspect force-systemd-flag-257205:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "aaddd2f58ff481b9bea238ca17fb1647334a946ebe862bf03f15714342c87d7e",
	        "Created": "2025-10-13T21:53:42.99063421Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 163335,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-13T21:53:43.068311796Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/aaddd2f58ff481b9bea238ca17fb1647334a946ebe862bf03f15714342c87d7e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/aaddd2f58ff481b9bea238ca17fb1647334a946ebe862bf03f15714342c87d7e/hostname",
	        "HostsPath": "/var/lib/docker/containers/aaddd2f58ff481b9bea238ca17fb1647334a946ebe862bf03f15714342c87d7e/hosts",
	        "LogPath": "/var/lib/docker/containers/aaddd2f58ff481b9bea238ca17fb1647334a946ebe862bf03f15714342c87d7e/aaddd2f58ff481b9bea238ca17fb1647334a946ebe862bf03f15714342c87d7e-json.log",
	        "Name": "/force-systemd-flag-257205",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "force-systemd-flag-257205:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "force-systemd-flag-257205",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "aaddd2f58ff481b9bea238ca17fb1647334a946ebe862bf03f15714342c87d7e",
	                "LowerDir": "/var/lib/docker/overlay2/c6fad3d5bef0bce15c6988d5a66ace4070d48455143e8d6894ba5a36fd4055c2-init/diff:/var/lib/docker/overlay2/160e637be34680c755ffc6109c686a7fe5c4dfcba06c9274b3806724dc064518/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c6fad3d5bef0bce15c6988d5a66ace4070d48455143e8d6894ba5a36fd4055c2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c6fad3d5bef0bce15c6988d5a66ace4070d48455143e8d6894ba5a36fd4055c2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c6fad3d5bef0bce15c6988d5a66ace4070d48455143e8d6894ba5a36fd4055c2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "force-systemd-flag-257205",
	                "Source": "/var/lib/docker/volumes/force-systemd-flag-257205/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "force-systemd-flag-257205",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "force-systemd-flag-257205",
	                "name.minikube.sigs.k8s.io": "force-systemd-flag-257205",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5d2f7683fade5fa6532b9aa555127a2532e8ae07f8e67d8d078227ef82d16b75",
	            "SandboxKey": "/var/run/docker/netns/5d2f7683fade",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33031"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33032"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33035"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33033"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33034"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "force-systemd-flag-257205": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "0a:06:77:0f:f3:00",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fba820387dada3b266e31d7878ab1eb4cf5568a350e5818d1f5d343436a1fd94",
	                    "EndpointID": "22858c2ce8adb2011a5e7d8aef1d56551ca99b4e1cfc3aa78ead157a426ab832",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "force-systemd-flag-257205",
	                        "aaddd2f58ff4"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-flag-257205 -n force-systemd-flag-257205
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-flag-257205 -n force-systemd-flag-257205: exit status 6 (318.359191ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1013 22:02:11.385579  172409 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-257205" does not appear in /home/jenkins/minikube-integration/21724-2495/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestForceSystemdFlag FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestForceSystemdFlag]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-257205 logs -n 25
helpers_test.go:260: TestForceSystemdFlag logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                    ARGS                                                    │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-122822 sudo systemctl cat kubelet --no-pager                                                     │ cilium-122822             │ jenkins │ v1.37.0 │ 13 Oct 25 21:55 UTC │                     │
	│ ssh     │ -p cilium-122822 sudo journalctl -xeu kubelet --all --full --no-pager                                      │ cilium-122822             │ jenkins │ v1.37.0 │ 13 Oct 25 21:55 UTC │                     │
	│ ssh     │ -p cilium-122822 sudo cat /etc/kubernetes/kubelet.conf                                                     │ cilium-122822             │ jenkins │ v1.37.0 │ 13 Oct 25 21:55 UTC │                     │
	│ ssh     │ -p cilium-122822 sudo cat /var/lib/kubelet/config.yaml                                                     │ cilium-122822             │ jenkins │ v1.37.0 │ 13 Oct 25 21:55 UTC │                     │
	│ ssh     │ -p cilium-122822 sudo systemctl status docker --all --full --no-pager                                      │ cilium-122822             │ jenkins │ v1.37.0 │ 13 Oct 25 21:55 UTC │                     │
	│ ssh     │ -p cilium-122822 sudo systemctl cat docker --no-pager                                                      │ cilium-122822             │ jenkins │ v1.37.0 │ 13 Oct 25 21:55 UTC │                     │
	│ ssh     │ -p cilium-122822 sudo cat /etc/docker/daemon.json                                                          │ cilium-122822             │ jenkins │ v1.37.0 │ 13 Oct 25 21:55 UTC │                     │
	│ ssh     │ -p cilium-122822 sudo docker system info                                                                   │ cilium-122822             │ jenkins │ v1.37.0 │ 13 Oct 25 21:55 UTC │                     │
	│ ssh     │ -p cilium-122822 sudo systemctl status cri-docker --all --full --no-pager                                  │ cilium-122822             │ jenkins │ v1.37.0 │ 13 Oct 25 21:55 UTC │                     │
	│ ssh     │ -p cilium-122822 sudo systemctl cat cri-docker --no-pager                                                  │ cilium-122822             │ jenkins │ v1.37.0 │ 13 Oct 25 21:55 UTC │                     │
	│ ssh     │ -p cilium-122822 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                             │ cilium-122822             │ jenkins │ v1.37.0 │ 13 Oct 25 21:55 UTC │                     │
	│ ssh     │ -p cilium-122822 sudo cat /usr/lib/systemd/system/cri-docker.service                                       │ cilium-122822             │ jenkins │ v1.37.0 │ 13 Oct 25 21:55 UTC │                     │
	│ ssh     │ -p cilium-122822 sudo cri-dockerd --version                                                                │ cilium-122822             │ jenkins │ v1.37.0 │ 13 Oct 25 21:55 UTC │                     │
	│ ssh     │ -p cilium-122822 sudo systemctl status containerd --all --full --no-pager                                  │ cilium-122822             │ jenkins │ v1.37.0 │ 13 Oct 25 21:55 UTC │                     │
	│ ssh     │ -p cilium-122822 sudo systemctl cat containerd --no-pager                                                  │ cilium-122822             │ jenkins │ v1.37.0 │ 13 Oct 25 21:55 UTC │                     │
	│ ssh     │ -p cilium-122822 sudo cat /lib/systemd/system/containerd.service                                           │ cilium-122822             │ jenkins │ v1.37.0 │ 13 Oct 25 21:55 UTC │                     │
	│ ssh     │ -p cilium-122822 sudo cat /etc/containerd/config.toml                                                      │ cilium-122822             │ jenkins │ v1.37.0 │ 13 Oct 25 21:55 UTC │                     │
	│ ssh     │ -p cilium-122822 sudo containerd config dump                                                               │ cilium-122822             │ jenkins │ v1.37.0 │ 13 Oct 25 21:55 UTC │                     │
	│ ssh     │ -p cilium-122822 sudo systemctl status crio --all --full --no-pager                                        │ cilium-122822             │ jenkins │ v1.37.0 │ 13 Oct 25 21:55 UTC │                     │
	│ ssh     │ -p cilium-122822 sudo systemctl cat crio --no-pager                                                        │ cilium-122822             │ jenkins │ v1.37.0 │ 13 Oct 25 21:55 UTC │                     │
	│ ssh     │ -p cilium-122822 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                              │ cilium-122822             │ jenkins │ v1.37.0 │ 13 Oct 25 21:55 UTC │                     │
	│ ssh     │ -p cilium-122822 sudo crio config                                                                          │ cilium-122822             │ jenkins │ v1.37.0 │ 13 Oct 25 21:55 UTC │                     │
	│ delete  │ -p cilium-122822                                                                                           │ cilium-122822             │ jenkins │ v1.37.0 │ 13 Oct 25 21:55 UTC │ 13 Oct 25 21:55 UTC │
	│ start   │ -p force-systemd-env-312094 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ force-systemd-env-312094  │ jenkins │ v1.37.0 │ 13 Oct 25 21:55 UTC │                     │
	│ ssh     │ force-systemd-flag-257205 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                       │ force-systemd-flag-257205 │ jenkins │ v1.37.0 │ 13 Oct 25 22:02 UTC │ 13 Oct 25 22:02 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 21:55:10
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 21:55:10.209393  168487 out.go:360] Setting OutFile to fd 1 ...
	I1013 21:55:10.209555  168487 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:55:10.209585  168487 out.go:374] Setting ErrFile to fd 2...
	I1013 21:55:10.209604  168487 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:55:10.209890  168487 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-2495/.minikube/bin
	I1013 21:55:10.210312  168487 out.go:368] Setting JSON to false
	I1013 21:55:10.211160  168487 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":5845,"bootTime":1760386666,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1013 21:55:10.211255  168487 start.go:141] virtualization:  
	I1013 21:55:10.214693  168487 out.go:179] * [force-systemd-env-312094] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1013 21:55:10.217956  168487 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 21:55:10.218026  168487 notify.go:220] Checking for updates...
	I1013 21:55:10.224770  168487 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 21:55:10.227746  168487 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-2495/kubeconfig
	I1013 21:55:10.230601  168487 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-2495/.minikube
	I1013 21:55:10.233390  168487 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1013 21:55:10.236318  168487 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=true
	I1013 21:55:10.239738  168487 config.go:182] Loaded profile config "force-systemd-flag-257205": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:55:10.239876  168487 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 21:55:10.264210  168487 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1013 21:55:10.264321  168487 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 21:55:10.327581  168487 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-13 21:55:10.317553311 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 21:55:10.328647  168487 docker.go:318] overlay module found
	I1013 21:55:10.331690  168487 out.go:179] * Using the docker driver based on user configuration
	I1013 21:55:10.334560  168487 start.go:305] selected driver: docker
	I1013 21:55:10.334578  168487 start.go:925] validating driver "docker" against <nil>
	I1013 21:55:10.334593  168487 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 21:55:10.335334  168487 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 21:55:10.385291  168487 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-13 21:55:10.376888092 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 21:55:10.385450  168487 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1013 21:55:10.385671  168487 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1013 21:55:10.388604  168487 out.go:179] * Using Docker driver with root privileges
	I1013 21:55:10.391406  168487 cni.go:84] Creating CNI manager for ""
	I1013 21:55:10.391470  168487 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 21:55:10.391483  168487 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1013 21:55:10.391557  168487 start.go:349] cluster config:
	{Name:force-systemd-env-312094 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-312094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 21:55:10.394684  168487 out.go:179] * Starting "force-systemd-env-312094" primary control-plane node in "force-systemd-env-312094" cluster
	I1013 21:55:10.397458  168487 cache.go:123] Beginning downloading kic base image for docker with crio
	I1013 21:55:10.400437  168487 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1013 21:55:10.403173  168487 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 21:55:10.403222  168487 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-2495/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1013 21:55:10.403235  168487 cache.go:58] Caching tarball of preloaded images
	I1013 21:55:10.403263  168487 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1013 21:55:10.403312  168487 preload.go:233] Found /home/jenkins/minikube-integration/21724-2495/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1013 21:55:10.403321  168487 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1013 21:55:10.403442  168487 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-env-312094/config.json ...
	I1013 21:55:10.403460  168487 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-env-312094/config.json: {Name:mk12f3078fd64eaff5310ce92c9a156a90779f5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:55:10.421994  168487 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1013 21:55:10.422017  168487 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1013 21:55:10.422041  168487 cache.go:232] Successfully downloaded all kic artifacts
	I1013 21:55:10.422062  168487 start.go:360] acquireMachinesLock for force-systemd-env-312094: {Name:mke65a331adc28a1288932bab33b66b5316bb30f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 21:55:10.422183  168487 start.go:364] duration metric: took 95.365µs to acquireMachinesLock for "force-systemd-env-312094"
	I1013 21:55:10.422210  168487 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-312094 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-312094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 21:55:10.422274  168487 start.go:125] createHost starting for "" (driver="docker")
	I1013 21:55:10.425556  168487 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1013 21:55:10.425768  168487 start.go:159] libmachine.API.Create for "force-systemd-env-312094" (driver="docker")
	I1013 21:55:10.425812  168487 client.go:168] LocalClient.Create starting
	I1013 21:55:10.425877  168487 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem
	I1013 21:55:10.425913  168487 main.go:141] libmachine: Decoding PEM data...
	I1013 21:55:10.425930  168487 main.go:141] libmachine: Parsing certificate...
	I1013 21:55:10.425982  168487 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem
	I1013 21:55:10.426003  168487 main.go:141] libmachine: Decoding PEM data...
	I1013 21:55:10.426021  168487 main.go:141] libmachine: Parsing certificate...
	I1013 21:55:10.426376  168487 cli_runner.go:164] Run: docker network inspect force-systemd-env-312094 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1013 21:55:10.442347  168487 cli_runner.go:211] docker network inspect force-systemd-env-312094 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1013 21:55:10.442437  168487 network_create.go:284] running [docker network inspect force-systemd-env-312094] to gather additional debugging logs...
	I1013 21:55:10.442459  168487 cli_runner.go:164] Run: docker network inspect force-systemd-env-312094
	W1013 21:55:10.458174  168487 cli_runner.go:211] docker network inspect force-systemd-env-312094 returned with exit code 1
	I1013 21:55:10.458206  168487 network_create.go:287] error running [docker network inspect force-systemd-env-312094]: docker network inspect force-systemd-env-312094: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-312094 not found
	I1013 21:55:10.458220  168487 network_create.go:289] output of [docker network inspect force-systemd-env-312094]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-312094 not found
	
	** /stderr **
	I1013 21:55:10.458322  168487 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 21:55:10.474220  168487 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-95647f6063f5 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:2e:3d:b3:ce:26:60} reservation:<nil>}
	I1013 21:55:10.474548  168487 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-524c3512c6b6 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:06:88:a1:02:e0:8e} reservation:<nil>}
	I1013 21:55:10.474888  168487 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-2d17b8b5c002 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:aa:ca:29:7e:1f:a0} reservation:<nil>}
	I1013 21:55:10.475091  168487 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-fba820387dad IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ae:f6:b2:86:7d:91} reservation:<nil>}
	I1013 21:55:10.475504  168487 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019d9960}
	I1013 21:55:10.475533  168487 network_create.go:124] attempt to create docker network force-systemd-env-312094 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1013 21:55:10.475599  168487 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-312094 force-systemd-env-312094
	I1013 21:55:10.533854  168487 network_create.go:108] docker network force-systemd-env-312094 192.168.85.0/24 created
	I1013 21:55:10.533888  168487 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-env-312094" container
	I1013 21:55:10.533964  168487 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1013 21:55:10.550637  168487 cli_runner.go:164] Run: docker volume create force-systemd-env-312094 --label name.minikube.sigs.k8s.io=force-systemd-env-312094 --label created_by.minikube.sigs.k8s.io=true
	I1013 21:55:10.568548  168487 oci.go:103] Successfully created a docker volume force-systemd-env-312094
	I1013 21:55:10.568640  168487 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-312094-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-312094 --entrypoint /usr/bin/test -v force-systemd-env-312094:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1013 21:55:11.080567  168487 oci.go:107] Successfully prepared a docker volume force-systemd-env-312094
	I1013 21:55:11.080623  168487 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 21:55:11.080643  168487 kic.go:194] Starting extracting preloaded images to volume ...
	I1013 21:55:11.080711  168487 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-2495/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-312094:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1013 21:55:15.510059  168487 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-2495/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-312094:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.429305399s)
	I1013 21:55:15.510106  168487 kic.go:203] duration metric: took 4.429452071s to extract preloaded images to volume ...
	W1013 21:55:15.510255  168487 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1013 21:55:15.510403  168487 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1013 21:55:15.563811  168487 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-312094 --name force-systemd-env-312094 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-312094 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-312094 --network force-systemd-env-312094 --ip 192.168.85.2 --volume force-systemd-env-312094:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1013 21:55:15.855559  168487 cli_runner.go:164] Run: docker container inspect force-systemd-env-312094 --format={{.State.Running}}
	I1013 21:55:15.877170  168487 cli_runner.go:164] Run: docker container inspect force-systemd-env-312094 --format={{.State.Status}}
	I1013 21:55:15.899839  168487 cli_runner.go:164] Run: docker exec force-systemd-env-312094 stat /var/lib/dpkg/alternatives/iptables
	I1013 21:55:15.954596  168487 oci.go:144] the created container "force-systemd-env-312094" has a running status.
	I1013 21:55:15.954639  168487 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21724-2495/.minikube/machines/force-systemd-env-312094/id_rsa...
	I1013 21:55:17.334455  168487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21724-2495/.minikube/machines/force-systemd-env-312094/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1013 21:55:17.334506  168487 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21724-2495/.minikube/machines/force-systemd-env-312094/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1013 21:55:17.373427  168487 cli_runner.go:164] Run: docker container inspect force-systemd-env-312094 --format={{.State.Status}}
	I1013 21:55:17.396068  168487 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1013 21:55:17.396090  168487 kic_runner.go:114] Args: [docker exec --privileged force-systemd-env-312094 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1013 21:55:17.446457  168487 cli_runner.go:164] Run: docker container inspect force-systemd-env-312094 --format={{.State.Status}}
	I1013 21:55:17.466451  168487 machine.go:93] provisionDockerMachine start ...
	I1013 21:55:17.466548  168487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-312094
	I1013 21:55:17.493366  168487 main.go:141] libmachine: Using SSH client type: native
	I1013 21:55:17.493714  168487 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33036 <nil> <nil>}
	I1013 21:55:17.493731  168487 main.go:141] libmachine: About to run SSH command:
	hostname
	I1013 21:55:17.707388  168487 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-env-312094
	
	I1013 21:55:17.707452  168487 ubuntu.go:182] provisioning hostname "force-systemd-env-312094"
	I1013 21:55:17.707547  168487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-312094
	I1013 21:55:17.730811  168487 main.go:141] libmachine: Using SSH client type: native
	I1013 21:55:17.731107  168487 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33036 <nil> <nil>}
	I1013 21:55:17.731124  168487 main.go:141] libmachine: About to run SSH command:
	sudo hostname force-systemd-env-312094 && echo "force-systemd-env-312094" | sudo tee /etc/hostname
	I1013 21:55:17.895908  168487 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-env-312094
	
	I1013 21:55:17.895983  168487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-312094
	I1013 21:55:17.917366  168487 main.go:141] libmachine: Using SSH client type: native
	I1013 21:55:17.917679  168487 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33036 <nil> <nil>}
	I1013 21:55:17.917704  168487 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-env-312094' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-312094/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-env-312094' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 21:55:18.072426  168487 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1013 21:55:18.072454  168487 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-2495/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-2495/.minikube}
	I1013 21:55:18.072479  168487 ubuntu.go:190] setting up certificates
	I1013 21:55:18.072494  168487 provision.go:84] configureAuth start
	I1013 21:55:18.072555  168487 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-312094
	I1013 21:55:18.089826  168487 provision.go:143] copyHostCerts
	I1013 21:55:18.089877  168487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21724-2495/.minikube/ca.pem
	I1013 21:55:18.089913  168487 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-2495/.minikube/ca.pem, removing ...
	I1013 21:55:18.089924  168487 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-2495/.minikube/ca.pem
	I1013 21:55:18.090000  168487 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-2495/.minikube/ca.pem (1082 bytes)
	I1013 21:55:18.090084  168487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21724-2495/.minikube/cert.pem
	I1013 21:55:18.090105  168487 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-2495/.minikube/cert.pem, removing ...
	I1013 21:55:18.090110  168487 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-2495/.minikube/cert.pem
	I1013 21:55:18.090140  168487 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-2495/.minikube/cert.pem (1123 bytes)
	I1013 21:55:18.090193  168487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21724-2495/.minikube/key.pem
	I1013 21:55:18.090216  168487 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-2495/.minikube/key.pem, removing ...
	I1013 21:55:18.090224  168487 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-2495/.minikube/key.pem
	I1013 21:55:18.090249  168487 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-2495/.minikube/key.pem (1675 bytes)
	I1013 21:55:18.090300  168487 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-2495/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca-key.pem org=jenkins.force-systemd-env-312094 san=[127.0.0.1 192.168.85.2 force-systemd-env-312094 localhost minikube]
	I1013 21:55:18.535444  168487 provision.go:177] copyRemoteCerts
	I1013 21:55:18.535507  168487 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 21:55:18.535548  168487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-312094
	I1013 21:55:18.553806  168487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33036 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/force-systemd-env-312094/id_rsa Username:docker}
	I1013 21:55:18.656574  168487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1013 21:55:18.656644  168487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1013 21:55:18.680171  168487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21724-2495/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1013 21:55:18.680237  168487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1013 21:55:18.701252  168487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21724-2495/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1013 21:55:18.701316  168487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1013 21:55:18.719004  168487 provision.go:87] duration metric: took 646.495422ms to configureAuth
	I1013 21:55:18.719028  168487 ubuntu.go:206] setting minikube options for container-runtime
	I1013 21:55:18.719206  168487 config.go:182] Loaded profile config "force-systemd-env-312094": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:55:18.719301  168487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-312094
	I1013 21:55:18.735675  168487 main.go:141] libmachine: Using SSH client type: native
	I1013 21:55:18.736019  168487 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33036 <nil> <nil>}
	I1013 21:55:18.736041  168487 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1013 21:55:18.990175  168487 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1013 21:55:18.990200  168487 machine.go:96] duration metric: took 1.52372643s to provisionDockerMachine
	I1013 21:55:18.990211  168487 client.go:171] duration metric: took 8.564390458s to LocalClient.Create
	I1013 21:55:18.990225  168487 start.go:167] duration metric: took 8.564457779s to libmachine.API.Create "force-systemd-env-312094"
	I1013 21:55:18.990243  168487 start.go:293] postStartSetup for "force-systemd-env-312094" (driver="docker")
	I1013 21:55:18.990258  168487 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 21:55:18.990332  168487 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 21:55:18.990378  168487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-312094
	I1013 21:55:19.008768  168487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33036 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/force-systemd-env-312094/id_rsa Username:docker}
	I1013 21:55:19.111223  168487 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 21:55:19.114106  168487 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1013 21:55:19.114134  168487 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1013 21:55:19.114145  168487 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-2495/.minikube/addons for local assets ...
	I1013 21:55:19.114197  168487 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-2495/.minikube/files for local assets ...
	I1013 21:55:19.114280  168487 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem -> 42992.pem in /etc/ssl/certs
	I1013 21:55:19.114287  168487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem -> /etc/ssl/certs/42992.pem
	I1013 21:55:19.114393  168487 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1013 21:55:19.121274  168487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem --> /etc/ssl/certs/42992.pem (1708 bytes)
	I1013 21:55:19.137834  168487 start.go:296] duration metric: took 147.572273ms for postStartSetup
	I1013 21:55:19.138172  168487 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-312094
	I1013 21:55:19.154438  168487 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-env-312094/config.json ...
	I1013 21:55:19.154711  168487 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 21:55:19.154759  168487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-312094
	I1013 21:55:19.171889  168487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33036 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/force-systemd-env-312094/id_rsa Username:docker}
	I1013 21:55:19.268339  168487 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1013 21:55:19.272674  168487 start.go:128] duration metric: took 8.850384307s to createHost
	I1013 21:55:19.272695  168487 start.go:83] releasing machines lock for "force-systemd-env-312094", held for 8.850500693s
	I1013 21:55:19.272761  168487 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-312094
	I1013 21:55:19.288112  168487 ssh_runner.go:195] Run: cat /version.json
	I1013 21:55:19.288172  168487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-312094
	I1013 21:55:19.288414  168487 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 21:55:19.288474  168487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-312094
	I1013 21:55:19.304879  168487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33036 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/force-systemd-env-312094/id_rsa Username:docker}
	I1013 21:55:19.321736  168487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33036 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/force-systemd-env-312094/id_rsa Username:docker}
	I1013 21:55:19.411354  168487 ssh_runner.go:195] Run: systemctl --version
	I1013 21:55:19.500125  168487 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1013 21:55:19.537162  168487 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 21:55:19.541424  168487 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 21:55:19.541494  168487 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 21:55:19.569736  168487 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1013 21:55:19.569760  168487 start.go:495] detecting cgroup driver to use...
	I1013 21:55:19.569777  168487 start.go:499] using "systemd" cgroup driver as enforced via flags
	I1013 21:55:19.569827  168487 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1013 21:55:19.586433  168487 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1013 21:55:19.598863  168487 docker.go:218] disabling cri-docker service (if available) ...
	I1013 21:55:19.598920  168487 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 21:55:19.616249  168487 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 21:55:19.634437  168487 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 21:55:19.750596  168487 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 21:55:19.880610  168487 docker.go:234] disabling docker service ...
	I1013 21:55:19.880687  168487 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 21:55:19.901476  168487 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 21:55:19.914595  168487 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 21:55:20.036891  168487 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 21:55:20.167553  168487 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1013 21:55:20.181171  168487 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 21:55:20.195537  168487 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1013 21:55:20.195682  168487 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 21:55:20.205227  168487 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1013 21:55:20.205297  168487 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 21:55:20.215289  168487 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 21:55:20.224608  168487 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 21:55:20.233077  168487 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 21:55:20.242062  168487 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 21:55:20.250878  168487 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 21:55:20.266141  168487 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 21:55:20.274870  168487 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 21:55:20.282244  168487 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1013 21:55:20.289533  168487 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 21:55:20.412055  168487 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1013 21:55:20.540958  168487 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1013 21:55:20.541022  168487 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1013 21:55:20.544647  168487 start.go:563] Will wait 60s for crictl version
	I1013 21:55:20.544704  168487 ssh_runner.go:195] Run: which crictl
	I1013 21:55:20.548316  168487 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1013 21:55:20.576786  168487 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1013 21:55:20.576865  168487 ssh_runner.go:195] Run: crio --version
	I1013 21:55:20.607042  168487 ssh_runner.go:195] Run: crio --version
	I1013 21:55:20.639889  168487 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1013 21:55:20.642688  168487 cli_runner.go:164] Run: docker network inspect force-systemd-env-312094 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 21:55:20.658309  168487 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1013 21:55:20.661976  168487 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 21:55:20.671392  168487 kubeadm.go:883] updating cluster {Name:force-systemd-env-312094 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-312094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 21:55:20.671519  168487 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 21:55:20.671578  168487 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 21:55:20.702794  168487 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 21:55:20.702820  168487 crio.go:433] Images already preloaded, skipping extraction
	I1013 21:55:20.702874  168487 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 21:55:20.727157  168487 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 21:55:20.727185  168487 cache_images.go:85] Images are preloaded, skipping loading
	I1013 21:55:20.727200  168487 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1013 21:55:20.727289  168487 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=force-systemd-env-312094 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-312094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
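kubeadm.go:946 above is printing the kubelet systemd drop-in that the 374-byte scp a few lines below writes to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf: the empty ExecStart= clears the packaged command line before the override is set, and Wants=crio.service ties the kubelet to CRI-O. Applied by hand it would look roughly like this (unit contents copied from the log above; the daemon-reload/start correspond to the steps further down):

sudo mkdir -p /etc/systemd/system/kubelet.service.d
sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=force-systemd-env-312094 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2

[Install]
EOF
sudo systemctl daemon-reload
sudo systemctl restart kubelet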
	I1013 21:55:20.727370  168487 ssh_runner.go:195] Run: crio config
	I1013 21:55:20.787960  168487 cni.go:84] Creating CNI manager for ""
	I1013 21:55:20.787984  168487 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 21:55:20.788005  168487 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1013 21:55:20.788051  168487 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-env-312094 NodeName:force-systemd-env-312094 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 21:55:20.788195  168487 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-env-312094"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
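For a force-systemd run the key line in this generated config is cgroupDriver: systemd, which has to agree with CRI-O's cgroup manager; a mismatch between the two is a classic reason for control-plane pods never becoming healthy. Once the YAML has been copied to /var/tmp/minikube/kubeadm.yaml (the scp step below), the two settings can be compared on the node; the cgroup_manager key name is CRI-O's TOML spelling and is assumed here, not taken from this log:

grep -n cgroupDriver /var/tmp/minikube/kubeadm.yaml      # what kubeadm will hand to the kubelet (systemd)
sudo crio config 2>/dev/null | grep -i cgroup_manager    # what CRI-O is actually using; should also be systemd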
	
	I1013 21:55:20.788277  168487 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1013 21:55:20.795978  168487 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 21:55:20.796046  168487 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 21:55:20.803605  168487 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1013 21:55:20.816663  168487 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 21:55:20.830640  168487 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
	I1013 21:55:20.844008  168487 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1013 21:55:20.847684  168487 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 21:55:20.857948  168487 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 21:55:20.965290  168487 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 21:55:20.984599  168487 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-env-312094 for IP: 192.168.85.2
	I1013 21:55:20.984617  168487 certs.go:195] generating shared ca certs ...
	I1013 21:55:20.984633  168487 certs.go:227] acquiring lock for ca certs: {Name:mk2386d3847709c1fe7ff4ab092e2e3fd8551167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:55:20.984766  168487 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-2495/.minikube/ca.key
	I1013 21:55:20.984814  168487 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.key
	I1013 21:55:20.984825  168487 certs.go:257] generating profile certs ...
	I1013 21:55:20.984876  168487 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-env-312094/client.key
	I1013 21:55:20.984900  168487 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-env-312094/client.crt with IP's: []
	I1013 21:55:21.665898  168487 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-env-312094/client.crt ...
	I1013 21:55:21.665931  168487 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-env-312094/client.crt: {Name:mkb645617bc0016a6d27b316d0713d42f424694d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:55:21.666132  168487 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-env-312094/client.key ...
	I1013 21:55:21.666149  168487 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-env-312094/client.key: {Name:mka4844cb4446e07dcd41b696998746d1340a9f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:55:21.666242  168487 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-env-312094/apiserver.key.a94f2806
	I1013 21:55:21.666260  168487 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-env-312094/apiserver.crt.a94f2806 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1013 21:55:22.240537  168487 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-env-312094/apiserver.crt.a94f2806 ...
	I1013 21:55:22.240571  168487 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-env-312094/apiserver.crt.a94f2806: {Name:mk14f1f35b6766a12e05b4367d669736a1a562f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:55:22.240759  168487 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-env-312094/apiserver.key.a94f2806 ...
	I1013 21:55:22.240774  168487 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-env-312094/apiserver.key.a94f2806: {Name:mkc0d8a71e3811cfd845189e686f430ba8cf2306 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:55:22.240865  168487 certs.go:382] copying /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-env-312094/apiserver.crt.a94f2806 -> /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-env-312094/apiserver.crt
	I1013 21:55:22.240948  168487 certs.go:386] copying /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-env-312094/apiserver.key.a94f2806 -> /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-env-312094/apiserver.key
	I1013 21:55:22.241011  168487 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-env-312094/proxy-client.key
	I1013 21:55:22.241031  168487 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-env-312094/proxy-client.crt with IP's: []
	I1013 21:55:22.573226  168487 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-env-312094/proxy-client.crt ...
	I1013 21:55:22.573257  168487 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-env-312094/proxy-client.crt: {Name:mk2d4a7cdac452f8079cc4adb92ad7a1987056ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:55:22.573437  168487 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-env-312094/proxy-client.key ...
	I1013 21:55:22.573451  168487 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-env-312094/proxy-client.key: {Name:mkb7eaa24982b101bec8ec39be6dad93e7dd60f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:55:22.573531  168487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21724-2495/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1013 21:55:22.573555  168487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21724-2495/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1013 21:55:22.573568  168487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1013 21:55:22.573585  168487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1013 21:55:22.573605  168487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-env-312094/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1013 21:55:22.573621  168487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-env-312094/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1013 21:55:22.573635  168487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-env-312094/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1013 21:55:22.573650  168487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-env-312094/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1013 21:55:22.573706  168487 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/4299.pem (1338 bytes)
	W1013 21:55:22.573744  168487 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-2495/.minikube/certs/4299_empty.pem, impossibly tiny 0 bytes
	I1013 21:55:22.573756  168487 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca-key.pem (1679 bytes)
	I1013 21:55:22.573789  168487 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem (1082 bytes)
	I1013 21:55:22.573816  168487 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem (1123 bytes)
	I1013 21:55:22.573841  168487 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/key.pem (1675 bytes)
	I1013 21:55:22.573886  168487 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem (1708 bytes)
	I1013 21:55:22.573919  168487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21724-2495/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1013 21:55:22.573934  168487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/4299.pem -> /usr/share/ca-certificates/4299.pem
	I1013 21:55:22.573945  168487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem -> /usr/share/ca-certificates/42992.pem
	I1013 21:55:22.574567  168487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 21:55:22.595405  168487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1013 21:55:22.614271  168487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 21:55:22.632885  168487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1013 21:55:22.652974  168487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-env-312094/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1013 21:55:22.671598  168487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-env-312094/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1013 21:55:22.688985  168487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-env-312094/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 21:55:22.706468  168487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-env-312094/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1013 21:55:22.723108  168487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 21:55:22.740830  168487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/certs/4299.pem --> /usr/share/ca-certificates/4299.pem (1338 bytes)
	I1013 21:55:22.758417  168487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem --> /usr/share/ca-certificates/42992.pem (1708 bytes)
	I1013 21:55:22.775402  168487 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 21:55:22.788312  168487 ssh_runner.go:195] Run: openssl version
	I1013 21:55:22.794343  168487 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 21:55:22.802875  168487 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 21:55:22.806446  168487 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 20:59 /usr/share/ca-certificates/minikubeCA.pem
	I1013 21:55:22.806547  168487 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 21:55:22.847299  168487 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1013 21:55:22.855430  168487 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4299.pem && ln -fs /usr/share/ca-certificates/4299.pem /etc/ssl/certs/4299.pem"
	I1013 21:55:22.864056  168487 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4299.pem
	I1013 21:55:22.867501  168487 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 13 21:06 /usr/share/ca-certificates/4299.pem
	I1013 21:55:22.867560  168487 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4299.pem
	I1013 21:55:22.908801  168487 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4299.pem /etc/ssl/certs/51391683.0"
	I1013 21:55:22.917162  168487 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/42992.pem && ln -fs /usr/share/ca-certificates/42992.pem /etc/ssl/certs/42992.pem"
	I1013 21:55:22.925356  168487 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42992.pem
	I1013 21:55:22.929014  168487 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 13 21:06 /usr/share/ca-certificates/42992.pem
	I1013 21:55:22.929085  168487 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42992.pem
	I1013 21:55:22.970421  168487 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/42992.pem /etc/ssl/certs/3ec20f2e.0"
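The openssl x509 -hash / ln -fs pairs above build an OpenSSL-style hashed certificate directory: each link in /etc/ssl/certs is named after the subject-name hash of the PEM it points at (b5213941 for the minikube CA in this run), which is how TLS clients on the node look up the CA and the extra test certs. The same computation for one certificate, as a sketch:

cert=/usr/share/ca-certificates/minikubeCA.pem
hash="$(openssl x509 -hash -noout -in "$cert")"     # subject-name hash, e.g. b5213941
sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"      # ".0" = first certificate with this hash
openssl verify -CApath /etc/ssl/certs "$cert"       # sanity check that lookup by hash now works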
	I1013 21:55:22.978486  168487 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 21:55:22.981968  168487 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1013 21:55:22.982021  168487 kubeadm.go:400] StartCluster: {Name:force-systemd-env-312094 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-312094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 21:55:22.982093  168487 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 21:55:22.982156  168487 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 21:55:23.009697  168487 cri.go:89] found id: ""
	I1013 21:55:23.009822  168487 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1013 21:55:23.017771  168487 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1013 21:55:23.025340  168487 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1013 21:55:23.025441  168487 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1013 21:55:23.032780  168487 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1013 21:55:23.032801  168487 kubeadm.go:157] found existing configuration files:
	
	I1013 21:55:23.032851  168487 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1013 21:55:23.040270  168487 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1013 21:55:23.040335  168487 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1013 21:55:23.047355  168487 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1013 21:55:23.054596  168487 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1013 21:55:23.054655  168487 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1013 21:55:23.061807  168487 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1013 21:55:23.069010  168487 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1013 21:55:23.069081  168487 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1013 21:55:23.076102  168487 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1013 21:55:23.083321  168487 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1013 21:55:23.083400  168487 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1013 21:55:23.090670  168487 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
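If this init step stalls, the quickest vantage point is the node itself. Under the docker driver the node container carries the profile name (an assumption consistent with the docker network inspect call earlier in this log), so one can shell in and watch the kubelet while kubeadm waits on the checks below:

minikube ssh -p force-systemd-env-312094          # or: docker exec -it force-systemd-env-312094 bash
sudo journalctl -u kubelet --no-pager -f          # kubelet's view of the static pod launches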
	I1013 21:55:23.132512  168487 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1013 21:55:23.132939  168487 kubeadm.go:318] [preflight] Running pre-flight checks
	I1013 21:55:23.174136  168487 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1013 21:55:23.174223  168487 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1013 21:55:23.174265  168487 kubeadm.go:318] OS: Linux
	I1013 21:55:23.174320  168487 kubeadm.go:318] CGROUPS_CPU: enabled
	I1013 21:55:23.174379  168487 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1013 21:55:23.174436  168487 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1013 21:55:23.174494  168487 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1013 21:55:23.174554  168487 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1013 21:55:23.174609  168487 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1013 21:55:23.174663  168487 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1013 21:55:23.174728  168487 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1013 21:55:23.174784  168487 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1013 21:55:23.249030  168487 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1013 21:55:23.249165  168487 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1013 21:55:23.249277  168487 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1013 21:55:23.258872  168487 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1013 21:55:23.265396  168487 out.go:252]   - Generating certificates and keys ...
	I1013 21:55:23.265509  168487 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1013 21:55:23.265582  168487 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1013 21:55:24.478718  168487 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1013 21:55:25.020141  168487 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1013 21:55:25.948085  168487 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1013 21:55:26.319019  168487 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1013 21:55:26.604991  168487 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1013 21:55:26.605338  168487 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [force-systemd-env-312094 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1013 21:55:26.969810  168487 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1013 21:55:26.970083  168487 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-312094 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1013 21:55:27.410716  168487 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1013 21:55:28.474769  168487 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1013 21:55:29.088807  168487 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1013 21:55:29.088887  168487 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1013 21:55:29.538878  168487 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1013 21:55:29.892715  168487 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1013 21:55:31.051218  168487 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1013 21:55:31.998830  168487 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1013 21:55:32.588068  168487 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1013 21:55:32.589064  168487 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1013 21:55:32.593202  168487 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1013 21:55:32.596965  168487 out.go:252]   - Booting up control plane ...
	I1013 21:55:32.597071  168487 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1013 21:55:32.597148  168487 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1013 21:55:32.597214  168487 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1013 21:55:32.611996  168487 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1013 21:55:32.612117  168487 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1013 21:55:32.619857  168487 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1013 21:55:32.619968  168487 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1013 21:55:32.620013  168487 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1013 21:55:32.746484  168487 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1013 21:55:32.746614  168487 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1013 21:55:33.747865  168487 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001671261s
	I1013 21:55:33.751195  168487 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1013 21:55:33.751490  168487 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1013 21:55:33.751585  168487 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1013 21:55:33.751894  168487 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
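These are the four endpoints kubeadm polls for up to 4m0s. Probing them by hand from inside the node distinguishes "nothing listening" (connection refused, as in the failures later in this log) from "up but unhealthy" (an HTTP error body). A sketch using the addresses from this run; the secure ports use self-signed certs, hence -k:

curl -s  http://127.0.0.1:10248/healthz  ; echo    # kubelet
curl -sk https://192.168.85.2:8443/livez ; echo    # kube-apiserver
curl -sk https://127.0.0.1:10257/healthz ; echo    # kube-controller-manager
curl -sk https://127.0.0.1:10259/livez   ; echo    # kube-scheduler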
	I1013 21:58:04.820103  162695 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.76.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1013 21:58:04.820206  162695 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1013 21:58:04.824105  162695 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1013 21:58:04.824178  162695 kubeadm.go:318] [preflight] Running pre-flight checks
	I1013 21:58:04.824281  162695 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1013 21:58:04.824356  162695 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1013 21:58:04.824398  162695 kubeadm.go:318] OS: Linux
	I1013 21:58:04.824450  162695 kubeadm.go:318] CGROUPS_CPU: enabled
	I1013 21:58:04.824503  162695 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1013 21:58:04.824556  162695 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1013 21:58:04.824610  162695 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1013 21:58:04.824663  162695 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1013 21:58:04.824719  162695 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1013 21:58:04.824769  162695 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1013 21:58:04.824842  162695 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1013 21:58:04.824911  162695 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1013 21:58:04.824989  162695 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1013 21:58:04.825102  162695 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1013 21:58:04.825204  162695 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1013 21:58:04.825277  162695 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1013 21:58:04.829922  162695 out.go:252]   - Generating certificates and keys ...
	I1013 21:58:04.830035  162695 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1013 21:58:04.830112  162695 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1013 21:58:04.830187  162695 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1013 21:58:04.830249  162695 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1013 21:58:04.830316  162695 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1013 21:58:04.830375  162695 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1013 21:58:04.830439  162695 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1013 21:58:04.830595  162695 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-257205 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1013 21:58:04.830681  162695 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1013 21:58:04.830833  162695 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-257205 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1013 21:58:04.830920  162695 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1013 21:58:04.831001  162695 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1013 21:58:04.831061  162695 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1013 21:58:04.831138  162695 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1013 21:58:04.831209  162695 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1013 21:58:04.831277  162695 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1013 21:58:04.831340  162695 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1013 21:58:04.831414  162695 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1013 21:58:04.831475  162695 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1013 21:58:04.831572  162695 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1013 21:58:04.831659  162695 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1013 21:58:04.834406  162695 out.go:252]   - Booting up control plane ...
	I1013 21:58:04.834516  162695 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1013 21:58:04.834605  162695 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1013 21:58:04.834694  162695 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1013 21:58:04.834848  162695 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1013 21:58:04.834982  162695 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1013 21:58:04.835127  162695 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1013 21:58:04.835247  162695 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1013 21:58:04.835304  162695 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1013 21:58:04.835459  162695 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1013 21:58:04.835594  162695 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1013 21:58:04.835666  162695 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001253415s
	I1013 21:58:04.835766  162695 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1013 21:58:04.835880  162695 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1013 21:58:04.835990  162695 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1013 21:58:04.836104  162695 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1013 21:58:04.836190  162695 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000115146s
	I1013 21:58:04.836291  162695 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000010837s
	I1013 21:58:04.836397  162695 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.00044952s
	I1013 21:58:04.836414  162695 kubeadm.go:318] 
	I1013 21:58:04.836520  162695 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1013 21:58:04.836649  162695 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1013 21:58:04.836775  162695 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1013 21:58:04.836878  162695 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1013 21:58:04.836963  162695 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1013 21:58:04.837049  162695 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1013 21:58:04.837056  162695 kubeadm.go:318] 
	W1013 21:58:04.837181  162695 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-257205 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-257205 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001253415s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000115146s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000010837s
	[control-plane-check] kube-scheduler is not healthy after 4m0.00044952s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.76.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
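This is the force-systemd-flag-257205 run (process 162695) timing out; the force-systemd-env-312094 run (168487) fails the same way further down. In both cases every control-plane endpoint refuses connections after the 4-minute wait, which points at the static pods never having come up. The follow-up kubeadm itself recommends, spelled out as commands (sudo added; CONTAINERID is a placeholder to fill from the first command's output):

sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID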
	
	I1013 21:58:04.837270  162695 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1013 21:58:05.369438  162695 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 21:58:05.382567  162695 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1013 21:58:05.382630  162695 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1013 21:58:05.390228  162695 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1013 21:58:05.390250  162695 kubeadm.go:157] found existing configuration files:
	
	I1013 21:58:05.390299  162695 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1013 21:58:05.397544  162695 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1013 21:58:05.397604  162695 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1013 21:58:05.404918  162695 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1013 21:58:05.412589  162695 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1013 21:58:05.412654  162695 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1013 21:58:05.419705  162695 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1013 21:58:05.426893  162695 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1013 21:58:05.427002  162695 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1013 21:58:05.434468  162695 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1013 21:58:05.442062  162695 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1013 21:58:05.442124  162695 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1013 21:58:05.449405  162695 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1013 21:58:05.490649  162695 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1013 21:58:05.490710  162695 kubeadm.go:318] [preflight] Running pre-flight checks
	I1013 21:58:05.513943  162695 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1013 21:58:05.514019  162695 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1013 21:58:05.514062  162695 kubeadm.go:318] OS: Linux
	I1013 21:58:05.514114  162695 kubeadm.go:318] CGROUPS_CPU: enabled
	I1013 21:58:05.514169  162695 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1013 21:58:05.514236  162695 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1013 21:58:05.514292  162695 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1013 21:58:05.514347  162695 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1013 21:58:05.514404  162695 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1013 21:58:05.514455  162695 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1013 21:58:05.514510  162695 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1013 21:58:05.514562  162695 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1013 21:58:05.585933  162695 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1013 21:58:05.586050  162695 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1013 21:58:05.586148  162695 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1013 21:58:05.600197  162695 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1013 21:58:05.607116  162695 out.go:252]   - Generating certificates and keys ...
	I1013 21:58:05.607240  162695 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1013 21:58:05.607322  162695 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1013 21:58:05.607415  162695 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1013 21:58:05.607501  162695 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1013 21:58:05.607591  162695 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1013 21:58:05.607656  162695 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1013 21:58:05.607732  162695 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1013 21:58:05.607836  162695 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1013 21:58:05.607926  162695 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1013 21:58:05.608012  162695 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1013 21:58:05.608058  162695 kubeadm.go:318] [certs] Using the existing "sa" key
	I1013 21:58:05.608127  162695 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1013 21:58:06.121469  162695 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1013 21:58:06.821330  162695 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1013 21:58:07.699068  162695 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1013 21:58:08.245141  162695 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1013 21:58:09.060070  162695 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1013 21:58:09.060627  162695 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1013 21:58:09.063126  162695 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1013 21:58:09.066527  162695 out.go:252]   - Booting up control plane ...
	I1013 21:58:09.066628  162695 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1013 21:58:09.066718  162695 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1013 21:58:09.066793  162695 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1013 21:58:09.081225  162695 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1013 21:58:09.081568  162695 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1013 21:58:09.089218  162695 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1013 21:58:09.089606  162695 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1013 21:58:09.089841  162695 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1013 21:58:09.230473  162695 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1013 21:58:09.230600  162695 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1013 21:58:10.231865  162695 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001420521s
	I1013 21:58:10.235771  162695 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1013 21:58:10.235897  162695 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1013 21:58:10.235995  162695 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1013 21:58:10.236082  162695 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1013 21:59:33.751476  168487 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000135873s
	I1013 21:59:33.752432  168487 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.00099327s
	I1013 21:59:33.752806  168487 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001029388s
	I1013 21:59:33.752963  168487 kubeadm.go:318] 
	I1013 21:59:33.753074  168487 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1013 21:59:33.753163  168487 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1013 21:59:33.753268  168487 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1013 21:59:33.753528  168487 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1013 21:59:33.753612  168487 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1013 21:59:33.753693  168487 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1013 21:59:33.753697  168487 kubeadm.go:318] 
	I1013 21:59:33.757807  168487 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1013 21:59:33.758058  168487 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1013 21:59:33.758174  168487 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1013 21:59:33.758800  168487 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.85.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.85.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1013 21:59:33.758875  168487 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	W1013 21:59:33.759014  168487 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-env-312094 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-312094 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001671261s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000135873s
	[control-plane-check] kube-apiserver is not healthy after 4m0.00099327s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001029388s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.85.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.85.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1013 21:59:33.759102  168487 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1013 21:59:34.303465  168487 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 21:59:34.317170  168487 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1013 21:59:34.317238  168487 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1013 21:59:34.324957  168487 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1013 21:59:34.324977  168487 kubeadm.go:157] found existing configuration files:
	
	I1013 21:59:34.325025  168487 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1013 21:59:34.332570  168487 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1013 21:59:34.332630  168487 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1013 21:59:34.339699  168487 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1013 21:59:34.347238  168487 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1013 21:59:34.347304  168487 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1013 21:59:34.354371  168487 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1013 21:59:34.362436  168487 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1013 21:59:34.362553  168487 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1013 21:59:34.369962  168487 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1013 21:59:34.377605  168487 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1013 21:59:34.377675  168487 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1013 21:59:34.384969  168487 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1013 21:59:34.422161  168487 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1013 21:59:34.422218  168487 kubeadm.go:318] [preflight] Running pre-flight checks
	I1013 21:59:34.444299  168487 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1013 21:59:34.444390  168487 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1013 21:59:34.444428  168487 kubeadm.go:318] OS: Linux
	I1013 21:59:34.444475  168487 kubeadm.go:318] CGROUPS_CPU: enabled
	I1013 21:59:34.444526  168487 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1013 21:59:34.444575  168487 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1013 21:59:34.444625  168487 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1013 21:59:34.444675  168487 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1013 21:59:34.444725  168487 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1013 21:59:34.444775  168487 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1013 21:59:34.444826  168487 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1013 21:59:34.444873  168487 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1013 21:59:34.511091  168487 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1013 21:59:34.511206  168487 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1013 21:59:34.511301  168487 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1013 21:59:34.521630  168487 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1013 21:59:34.528626  168487 out.go:252]   - Generating certificates and keys ...
	I1013 21:59:34.528750  168487 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1013 21:59:34.528839  168487 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1013 21:59:34.528931  168487 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1013 21:59:34.529006  168487 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1013 21:59:34.529115  168487 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1013 21:59:34.529185  168487 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1013 21:59:34.529262  168487 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1013 21:59:34.529339  168487 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1013 21:59:34.529830  168487 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1013 21:59:34.530224  168487 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1013 21:59:34.530530  168487 kubeadm.go:318] [certs] Using the existing "sa" key
	I1013 21:59:34.530671  168487 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1013 21:59:34.906021  168487 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1013 21:59:35.275968  168487 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1013 21:59:35.539035  168487 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1013 21:59:36.429993  168487 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1013 21:59:36.649437  168487 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1013 21:59:36.650188  168487 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1013 21:59:36.652886  168487 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1013 21:59:36.656085  168487 out.go:252]   - Booting up control plane ...
	I1013 21:59:36.656194  168487 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1013 21:59:36.656286  168487 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1013 21:59:36.657293  168487 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1013 21:59:36.675918  168487 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1013 21:59:36.676047  168487 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1013 21:59:36.682835  168487 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1013 21:59:36.683145  168487 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1013 21:59:36.683350  168487 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1013 21:59:36.844078  168487 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1013 21:59:36.844220  168487 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1013 21:59:37.364816  168487 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 521.206463ms
	I1013 21:59:37.368814  168487 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1013 21:59:37.368929  168487 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1013 21:59:37.369033  168487 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1013 21:59:37.369141  168487 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1013 22:02:10.236756  162695 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000958405s
	I1013 22:02:10.237955  162695 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000820647s
	I1013 22:02:10.238057  162695 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000623434s
	I1013 22:02:10.238071  162695 kubeadm.go:318] 
	I1013 22:02:10.238167  162695 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1013 22:02:10.238256  162695 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1013 22:02:10.238350  162695 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1013 22:02:10.238452  162695 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1013 22:02:10.238531  162695 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1013 22:02:10.238644  162695 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1013 22:02:10.238654  162695 kubeadm.go:318] 
	I1013 22:02:10.243041  162695 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1013 22:02:10.243279  162695 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1013 22:02:10.243394  162695 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1013 22:02:10.244013  162695 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1013 22:02:10.244089  162695 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1013 22:02:10.244142  162695 kubeadm.go:402] duration metric: took 8m16.80221698s to StartCluster
	I1013 22:02:10.244178  162695 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1013 22:02:10.244238  162695 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1013 22:02:10.268351  162695 cri.go:89] found id: ""
	I1013 22:02:10.268383  162695 logs.go:282] 0 containers: []
	W1013 22:02:10.268391  162695 logs.go:284] No container was found matching "kube-apiserver"
	I1013 22:02:10.268399  162695 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1013 22:02:10.268452  162695 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1013 22:02:10.291966  162695 cri.go:89] found id: ""
	I1013 22:02:10.291989  162695 logs.go:282] 0 containers: []
	W1013 22:02:10.291997  162695 logs.go:284] No container was found matching "etcd"
	I1013 22:02:10.292004  162695 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1013 22:02:10.292062  162695 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1013 22:02:10.316217  162695 cri.go:89] found id: ""
	I1013 22:02:10.316241  162695 logs.go:282] 0 containers: []
	W1013 22:02:10.316258  162695 logs.go:284] No container was found matching "coredns"
	I1013 22:02:10.316264  162695 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1013 22:02:10.316337  162695 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1013 22:02:10.340887  162695 cri.go:89] found id: ""
	I1013 22:02:10.340910  162695 logs.go:282] 0 containers: []
	W1013 22:02:10.340919  162695 logs.go:284] No container was found matching "kube-scheduler"
	I1013 22:02:10.340925  162695 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1013 22:02:10.340979  162695 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1013 22:02:10.364292  162695 cri.go:89] found id: ""
	I1013 22:02:10.364314  162695 logs.go:282] 0 containers: []
	W1013 22:02:10.364322  162695 logs.go:284] No container was found matching "kube-proxy"
	I1013 22:02:10.364328  162695 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1013 22:02:10.364386  162695 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1013 22:02:10.389001  162695 cri.go:89] found id: ""
	I1013 22:02:10.389073  162695 logs.go:282] 0 containers: []
	W1013 22:02:10.389085  162695 logs.go:284] No container was found matching "kube-controller-manager"
	I1013 22:02:10.389101  162695 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1013 22:02:10.389163  162695 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1013 22:02:10.420029  162695 cri.go:89] found id: ""
	I1013 22:02:10.420054  162695 logs.go:282] 0 containers: []
	W1013 22:02:10.420062  162695 logs.go:284] No container was found matching "kindnet"
	I1013 22:02:10.420072  162695 logs.go:123] Gathering logs for describe nodes ...
	I1013 22:02:10.420083  162695 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1013 22:02:10.486215  162695 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1013 22:02:10.477965    2379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1013 22:02:10.479122    2379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1013 22:02:10.480202    2379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1013 22:02:10.480857    2379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1013 22:02:10.482280    2379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1013 22:02:10.477965    2379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1013 22:02:10.479122    2379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1013 22:02:10.480202    2379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1013 22:02:10.480857    2379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1013 22:02:10.482280    2379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1013 22:02:10.486247  162695 logs.go:123] Gathering logs for CRI-O ...
	I1013 22:02:10.486274  162695 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1013 22:02:10.560538  162695 logs.go:123] Gathering logs for container status ...
	I1013 22:02:10.560573  162695 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1013 22:02:10.587995  162695 logs.go:123] Gathering logs for kubelet ...
	I1013 22:02:10.588022  162695 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1013 22:02:10.675720  162695 logs.go:123] Gathering logs for dmesg ...
	I1013 22:02:10.675758  162695 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1013 22:02:10.690101  162695 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001420521s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000958405s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000820647s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000623434s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1013 22:02:10.690156  162695 out.go:285] * 
	W1013 22:02:10.690336  162695 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001420521s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000958405s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000820647s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000623434s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1013 22:02:10.690364  162695 out.go:285] * 
	W1013 22:02:10.692775  162695 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1013 22:02:10.700237  162695 out.go:203] 
	W1013 22:02:10.703128  162695 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001420521s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000958405s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000820647s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000623434s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1013 22:02:10.703164  162695 out.go:285] * 
	I1013 22:02:10.706146  162695 out.go:203] 
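Note on the failure above: all three control-plane probes ran out their full 4m0s budget (kube-apiserver on 192.168.76.2:8443/livez, kube-controller-manager on 127.0.0.1:10257/healthz, kube-scheduler on 127.0.0.1:10259/livez), so kubeadm never cleared the wait-control-plane phase on either attempt. The CRI-O and kubelet sections below show why the static pods never came up. If reproducing by hand, the same probes could be issued from inside the node roughly as follows (hypothetical commands, assuming the docker-driver node container is named after the profile and curl is available in its image; these are not part of the captured run):

	docker exec force-systemd-flag-257205 curl -ksS https://127.0.0.1:10259/livez    # kube-scheduler
	docker exec force-systemd-flag-257205 curl -ksS https://127.0.0.1:10257/healthz  # kube-controller-manager
	docker exec force-systemd-flag-257205 curl -ksS https://192.168.76.2:8443/livez  # kube-apiserver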
	
	
	==> CRI-O <==
	Oct 13 22:02:05 force-systemd-flag-257205 crio[838]: time="2025-10-13T22:02:05.805863306Z" level=info msg="createCtr: removing container a6ec6200fe3b708726bffd16d2752dd01d7f7a66fa8a1493798525c02bc26d08" id=9098838b-cab7-4c97-a3f6-17f9458b09d6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:02:05 force-systemd-flag-257205 crio[838]: time="2025-10-13T22:02:05.805894928Z" level=info msg="createCtr: deleting container a6ec6200fe3b708726bffd16d2752dd01d7f7a66fa8a1493798525c02bc26d08 from storage" id=9098838b-cab7-4c97-a3f6-17f9458b09d6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:02:05 force-systemd-flag-257205 crio[838]: time="2025-10-13T22:02:05.80853858Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-force-systemd-flag-257205_kube-system_42da5a6e43ce8725efe5cc09b44aa29f_0" id=9098838b-cab7-4c97-a3f6-17f9458b09d6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:02:09 force-systemd-flag-257205 crio[838]: time="2025-10-13T22:02:09.78769559Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=6b65ad00-d2c5-4d79-964e-55cfb322f79a name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:02:09 force-systemd-flag-257205 crio[838]: time="2025-10-13T22:02:09.788625101Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=273e2d53-96c2-42b6-bd16-06b8b2b37fb1 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:02:09 force-systemd-flag-257205 crio[838]: time="2025-10-13T22:02:09.789877305Z" level=info msg="Creating container: kube-system/kube-controller-manager-force-systemd-flag-257205/kube-controller-manager" id=3b150434-2815-4934-9413-ad9efe074ed0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:02:09 force-systemd-flag-257205 crio[838]: time="2025-10-13T22:02:09.790237296Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:02:09 force-systemd-flag-257205 crio[838]: time="2025-10-13T22:02:09.79470936Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:02:09 force-systemd-flag-257205 crio[838]: time="2025-10-13T22:02:09.795310518Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:02:09 force-systemd-flag-257205 crio[838]: time="2025-10-13T22:02:09.813131271Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=3b150434-2815-4934-9413-ad9efe074ed0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:02:09 force-systemd-flag-257205 crio[838]: time="2025-10-13T22:02:09.814479201Z" level=info msg="createCtr: deleting container ID f1ab01c2731995ce47ce5ae41412422272dc96cdacc3a28a7b1cfbd6cc7c6015 from idIndex" id=3b150434-2815-4934-9413-ad9efe074ed0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:02:09 force-systemd-flag-257205 crio[838]: time="2025-10-13T22:02:09.814526724Z" level=info msg="createCtr: removing container f1ab01c2731995ce47ce5ae41412422272dc96cdacc3a28a7b1cfbd6cc7c6015" id=3b150434-2815-4934-9413-ad9efe074ed0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:02:09 force-systemd-flag-257205 crio[838]: time="2025-10-13T22:02:09.814562005Z" level=info msg="createCtr: deleting container f1ab01c2731995ce47ce5ae41412422272dc96cdacc3a28a7b1cfbd6cc7c6015 from storage" id=3b150434-2815-4934-9413-ad9efe074ed0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:02:09 force-systemd-flag-257205 crio[838]: time="2025-10-13T22:02:09.819535283Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-force-systemd-flag-257205_kube-system_a855d4fe2f72668620e6af2dd0775ed9_0" id=3b150434-2815-4934-9413-ad9efe074ed0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:02:10 force-systemd-flag-257205 crio[838]: time="2025-10-13T22:02:10.788606348Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=fa4e9a0d-0162-48ce-b3c6-3e9dfd198347 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:02:10 force-systemd-flag-257205 crio[838]: time="2025-10-13T22:02:10.78938251Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=0161154a-c303-4875-998d-40aaf7dcd6ff name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:02:10 force-systemd-flag-257205 crio[838]: time="2025-10-13T22:02:10.790206219Z" level=info msg="Creating container: kube-system/kube-apiserver-force-systemd-flag-257205/kube-apiserver" id=0ce98739-4e2a-45ac-83a6-30fdaa45bd2f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:02:10 force-systemd-flag-257205 crio[838]: time="2025-10-13T22:02:10.790422887Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:02:10 force-systemd-flag-257205 crio[838]: time="2025-10-13T22:02:10.800370444Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:02:10 force-systemd-flag-257205 crio[838]: time="2025-10-13T22:02:10.800875876Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:02:10 force-systemd-flag-257205 crio[838]: time="2025-10-13T22:02:10.809728575Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=0ce98739-4e2a-45ac-83a6-30fdaa45bd2f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:02:10 force-systemd-flag-257205 crio[838]: time="2025-10-13T22:02:10.810824171Z" level=info msg="createCtr: deleting container ID 90b4663f804b0d6ce03cb7d9b168363f543cf138545fac31fa5d01b21b6849f7 from idIndex" id=0ce98739-4e2a-45ac-83a6-30fdaa45bd2f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:02:10 force-systemd-flag-257205 crio[838]: time="2025-10-13T22:02:10.810862439Z" level=info msg="createCtr: removing container 90b4663f804b0d6ce03cb7d9b168363f543cf138545fac31fa5d01b21b6849f7" id=0ce98739-4e2a-45ac-83a6-30fdaa45bd2f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:02:10 force-systemd-flag-257205 crio[838]: time="2025-10-13T22:02:10.810893888Z" level=info msg="createCtr: deleting container 90b4663f804b0d6ce03cb7d9b168363f543cf138545fac31fa5d01b21b6849f7 from storage" id=0ce98739-4e2a-45ac-83a6-30fdaa45bd2f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:02:10 force-systemd-flag-257205 crio[838]: time="2025-10-13T22:02:10.816891043Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-force-systemd-flag-257205_kube-system_58c604058f0c444a9c3694f8c84ccbb8_0" id=0ce98739-4e2a-45ac-83a6-30fdaa45bd2f name=/runtime.v1.RuntimeService/CreateContainer
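The repeated createCtr failures above ("Container creation error: cannot open sd-bus: No such file or directory") are the proximate cause of the timeouts: these force-systemd profiles run CRI-O with the systemd cgroup manager, and the OCI runtime cannot open a systemd D-Bus connection inside the docker-driver node, so every etcd, kube-apiserver and kube-controller-manager container create is rejected before the static pods can start. A minimal set of checks to confirm this from inside the node might look like the sketch below (hypothetical commands; config paths are assumed defaults, not taken from this run):

	# is CRI-O configured for the systemd cgroup manager?
	grep -R "cgroup_manager" /etc/crio /etc/crio/crio.conf.d 2>/dev/null
	# is a systemd D-Bus socket reachable for the runtime to talk to?
	ls -l /run/dbus/system_bus_socket
	pidof systemd || echo "no systemd init visible in this node"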
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1013 22:02:11.997082    2512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1013 22:02:11.997737    2512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1013 22:02:11.999305    2512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1013 22:02:11.999971    2512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1013 22:02:12.001853    2512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +4.197577] overlayfs: idmapped layers are currently not supported
	[Oct13 21:29] overlayfs: idmapped layers are currently not supported
	[ +40.174368] overlayfs: idmapped layers are currently not supported
	[Oct13 21:30] hrtimer: interrupt took 51471165 ns
	[Oct13 21:31] overlayfs: idmapped layers are currently not supported
	[Oct13 21:36] overlayfs: idmapped layers are currently not supported
	[ +36.803698] overlayfs: idmapped layers are currently not supported
	[Oct13 21:38] overlayfs: idmapped layers are currently not supported
	[Oct13 21:39] overlayfs: idmapped layers are currently not supported
	[Oct13 21:40] overlayfs: idmapped layers are currently not supported
	[Oct13 21:41] overlayfs: idmapped layers are currently not supported
	[Oct13 21:42] overlayfs: idmapped layers are currently not supported
	[  +7.684868] overlayfs: idmapped layers are currently not supported
	[Oct13 21:43] overlayfs: idmapped layers are currently not supported
	[ +17.500139] overlayfs: idmapped layers are currently not supported
	[Oct13 21:44] overlayfs: idmapped layers are currently not supported
	[ +25.978359] overlayfs: idmapped layers are currently not supported
	[Oct13 21:46] overlayfs: idmapped layers are currently not supported
	[Oct13 21:47] overlayfs: idmapped layers are currently not supported
	[Oct13 21:49] overlayfs: idmapped layers are currently not supported
	[Oct13 21:50] overlayfs: idmapped layers are currently not supported
	[Oct13 21:51] overlayfs: idmapped layers are currently not supported
	[Oct13 21:53] overlayfs: idmapped layers are currently not supported
	[Oct13 21:54] overlayfs: idmapped layers are currently not supported
	[Oct13 21:55] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 22:02:12 up  1:44,  0 user,  load average: 0.25, 0.83, 1.59
	Linux force-systemd-flag-257205 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 13 22:02:05 force-systemd-flag-257205 kubelet[1807]:         container etcd start failed in pod etcd-force-systemd-flag-257205_kube-system(42da5a6e43ce8725efe5cc09b44aa29f): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 13 22:02:05 force-systemd-flag-257205 kubelet[1807]:  > logger="UnhandledError"
	Oct 13 22:02:05 force-systemd-flag-257205 kubelet[1807]: E1013 22:02:05.808940    1807 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-force-systemd-flag-257205" podUID="42da5a6e43ce8725efe5cc09b44aa29f"
	Oct 13 22:02:06 force-systemd-flag-257205 kubelet[1807]: E1013 22:02:06.432985    1807 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.76.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/force-systemd-flag-257205?timeout=10s\": dial tcp 192.168.76.2:8443: connect: connection refused" interval="7s"
	Oct 13 22:02:06 force-systemd-flag-257205 kubelet[1807]: I1013 22:02:06.601794    1807 kubelet_node_status.go:75] "Attempting to register node" node="force-systemd-flag-257205"
	Oct 13 22:02:06 force-systemd-flag-257205 kubelet[1807]: E1013 22:02:06.602170    1807 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.76.2:8443/api/v1/nodes\": dial tcp 192.168.76.2:8443: connect: connection refused" node="force-systemd-flag-257205"
	Oct 13 22:02:06 force-systemd-flag-257205 kubelet[1807]: E1013 22:02:06.994389    1807 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.76.2:8443/api/v1/nodes?fieldSelector=metadata.name%3Dforce-systemd-flag-257205&limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	Oct 13 22:02:09 force-systemd-flag-257205 kubelet[1807]: E1013 22:02:09.787247    1807 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"force-systemd-flag-257205\" not found" node="force-systemd-flag-257205"
	Oct 13 22:02:09 force-systemd-flag-257205 kubelet[1807]: E1013 22:02:09.820970    1807 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 13 22:02:09 force-systemd-flag-257205 kubelet[1807]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 13 22:02:09 force-systemd-flag-257205 kubelet[1807]:  > podSandboxID="f8b952300f74e14280688ee69bb401c459e68996fc99e67fe0bbc4f61f086616"
	Oct 13 22:02:09 force-systemd-flag-257205 kubelet[1807]: E1013 22:02:09.821059    1807 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 13 22:02:09 force-systemd-flag-257205 kubelet[1807]:         container kube-controller-manager start failed in pod kube-controller-manager-force-systemd-flag-257205_kube-system(a855d4fe2f72668620e6af2dd0775ed9): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 13 22:02:09 force-systemd-flag-257205 kubelet[1807]:  > logger="UnhandledError"
	Oct 13 22:02:09 force-systemd-flag-257205 kubelet[1807]: E1013 22:02:09.821100    1807 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-force-systemd-flag-257205" podUID="a855d4fe2f72668620e6af2dd0775ed9"
	Oct 13 22:02:09 force-systemd-flag-257205 kubelet[1807]: E1013 22:02:09.845173    1807 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"force-systemd-flag-257205\" not found"
	Oct 13 22:02:09 force-systemd-flag-257205 kubelet[1807]: E1013 22:02:09.913963    1807 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.76.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.76.2:8443: connect: connection refused" event="&Event{ObjectMeta:{force-systemd-flag-257205.186e2bd30d704cc6  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:force-systemd-flag-257205,UID:force-systemd-flag-257205,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node force-systemd-flag-257205 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:force-systemd-flag-257205,},FirstTimestamp:2025-10-13 21:58:09.811999942 +0000 UTC m=+0.591489924,LastTimestamp:2025-10-13 21:58:09.811999942 +0000 UTC m=+0.591489924,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:k
ubelet,ReportingInstance:force-systemd-flag-257205,}"
	Oct 13 22:02:10 force-systemd-flag-257205 kubelet[1807]: E1013 22:02:10.788184    1807 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"force-systemd-flag-257205\" not found" node="force-systemd-flag-257205"
	Oct 13 22:02:10 force-systemd-flag-257205 kubelet[1807]: E1013 22:02:10.817156    1807 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 13 22:02:10 force-systemd-flag-257205 kubelet[1807]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 13 22:02:10 force-systemd-flag-257205 kubelet[1807]:  > podSandboxID="f47cf5f2175004e3cf59ad0fc25ca1906c9a15a82f874ffee54a7106403a3463"
	Oct 13 22:02:10 force-systemd-flag-257205 kubelet[1807]: E1013 22:02:10.817311    1807 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 13 22:02:10 force-systemd-flag-257205 kubelet[1807]:         container kube-apiserver start failed in pod kube-apiserver-force-systemd-flag-257205_kube-system(58c604058f0c444a9c3694f8c84ccbb8): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 13 22:02:10 force-systemd-flag-257205 kubelet[1807]:  > logger="UnhandledError"
	Oct 13 22:02:10 force-systemd-flag-257205 kubelet[1807]: E1013 22:02:10.817351    1807 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-force-systemd-flag-257205" podUID="58c604058f0c444a9c3694f8c84ccbb8"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-flag-257205 -n force-systemd-flag-257205
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-flag-257205 -n force-systemd-flag-257205: exit status 6 (321.787201ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1013 22:02:12.453110  172618 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-257205" does not appear in /home/jenkins/minikube-integration/21724-2495/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "force-systemd-flag-257205" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-257205" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-257205
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-257205: (2.279597851s)
--- FAIL: TestForceSystemdFlag (517.69s)
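Note on the failure above: every kubelet error in the dump reduces to the same root cause, "container create failed: cannot open sd-bus: No such file or directory", so the static control-plane pods (etcd, kube-apiserver, kube-controller-manager) never start and the apiserver on 192.168.76.2:8443 stays unreachable, which is why the earlier kubectl calls get "connection refused". CRI-O configured with the systemd cgroup manager typically needs a reachable systemd/D-Bus endpoint inside the node container to create scopes for new containers. A minimal diagnostic sketch, not part of the test harness: it assumes the force-systemd-flag-257205 node container still exists (the profile is deleted during cleanup above, so this would require re-running the test), and that the conventional system D-Bus socket path and the /etc/crio/crio.conf.d/ drop-in seen in the force-systemd-env start log below also apply here.

	# Inspect PID 1 inside the node container (systemd is expected for the systemd cgroup driver to work)
	docker exec force-systemd-flag-257205 cat /proc/1/comm
	# Check whether the system D-Bus socket that sd-bus commonly opens is present
	docker exec force-systemd-flag-257205 sh -c 'test -S /run/dbus/system_bus_socket && echo present || echo missing'
	# Confirm which cgroup manager minikube configured for CRI-O
	docker exec force-systemd-flag-257205 grep -r cgroup_manager /etc/crio/crio.conf.d/

If the socket is missing while cgroup_manager = "systemd", the OCI runtime cannot ask systemd to create container scopes, which matches the CreateContainerError entries above.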

                                                
                                    
TestForceSystemdEnv (511.14s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-312094 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1013 21:56:33.896560    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:56:50.824362    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:56:54.314524    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/functional-192425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:58:51.249593    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/functional-192425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:01:50.826016    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:155: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p force-systemd-env-312094 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: exit status 80 (8m27.700631505s)

                                                
                                                
-- stdout --
	* [force-systemd-env-312094] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21724
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21724-2495/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-2495/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "force-systemd-env-312094" primary control-plane node in "force-systemd-env-312094" cluster
	* Pulling base image v0.0.48-1759745255-21703 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1013 21:55:10.209393  168487 out.go:360] Setting OutFile to fd 1 ...
	I1013 21:55:10.209555  168487 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:55:10.209585  168487 out.go:374] Setting ErrFile to fd 2...
	I1013 21:55:10.209604  168487 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:55:10.209890  168487 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-2495/.minikube/bin
	I1013 21:55:10.210312  168487 out.go:368] Setting JSON to false
	I1013 21:55:10.211160  168487 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":5845,"bootTime":1760386666,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1013 21:55:10.211255  168487 start.go:141] virtualization:  
	I1013 21:55:10.214693  168487 out.go:179] * [force-systemd-env-312094] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1013 21:55:10.217956  168487 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 21:55:10.218026  168487 notify.go:220] Checking for updates...
	I1013 21:55:10.224770  168487 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 21:55:10.227746  168487 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-2495/kubeconfig
	I1013 21:55:10.230601  168487 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-2495/.minikube
	I1013 21:55:10.233390  168487 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1013 21:55:10.236318  168487 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=true
	I1013 21:55:10.239738  168487 config.go:182] Loaded profile config "force-systemd-flag-257205": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:55:10.239876  168487 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 21:55:10.264210  168487 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1013 21:55:10.264321  168487 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 21:55:10.327581  168487 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-13 21:55:10.317553311 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 21:55:10.328647  168487 docker.go:318] overlay module found
	I1013 21:55:10.331690  168487 out.go:179] * Using the docker driver based on user configuration
	I1013 21:55:10.334560  168487 start.go:305] selected driver: docker
	I1013 21:55:10.334578  168487 start.go:925] validating driver "docker" against <nil>
	I1013 21:55:10.334593  168487 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 21:55:10.335334  168487 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 21:55:10.385291  168487 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-13 21:55:10.376888092 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 21:55:10.385450  168487 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1013 21:55:10.385671  168487 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1013 21:55:10.388604  168487 out.go:179] * Using Docker driver with root privileges
	I1013 21:55:10.391406  168487 cni.go:84] Creating CNI manager for ""
	I1013 21:55:10.391470  168487 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 21:55:10.391483  168487 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1013 21:55:10.391557  168487 start.go:349] cluster config:
	{Name:force-systemd-env-312094 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-312094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 21:55:10.394684  168487 out.go:179] * Starting "force-systemd-env-312094" primary control-plane node in "force-systemd-env-312094" cluster
	I1013 21:55:10.397458  168487 cache.go:123] Beginning downloading kic base image for docker with crio
	I1013 21:55:10.400437  168487 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1013 21:55:10.403173  168487 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 21:55:10.403222  168487 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-2495/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1013 21:55:10.403235  168487 cache.go:58] Caching tarball of preloaded images
	I1013 21:55:10.403263  168487 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1013 21:55:10.403312  168487 preload.go:233] Found /home/jenkins/minikube-integration/21724-2495/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1013 21:55:10.403321  168487 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1013 21:55:10.403442  168487 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-env-312094/config.json ...
	I1013 21:55:10.403460  168487 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-env-312094/config.json: {Name:mk12f3078fd64eaff5310ce92c9a156a90779f5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:55:10.421994  168487 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1013 21:55:10.422017  168487 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1013 21:55:10.422041  168487 cache.go:232] Successfully downloaded all kic artifacts
	I1013 21:55:10.422062  168487 start.go:360] acquireMachinesLock for force-systemd-env-312094: {Name:mke65a331adc28a1288932bab33b66b5316bb30f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 21:55:10.422183  168487 start.go:364] duration metric: took 95.365µs to acquireMachinesLock for "force-systemd-env-312094"
	I1013 21:55:10.422210  168487 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-312094 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-312094 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 21:55:10.422274  168487 start.go:125] createHost starting for "" (driver="docker")
	I1013 21:55:10.425556  168487 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1013 21:55:10.425768  168487 start.go:159] libmachine.API.Create for "force-systemd-env-312094" (driver="docker")
	I1013 21:55:10.425812  168487 client.go:168] LocalClient.Create starting
	I1013 21:55:10.425877  168487 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem
	I1013 21:55:10.425913  168487 main.go:141] libmachine: Decoding PEM data...
	I1013 21:55:10.425930  168487 main.go:141] libmachine: Parsing certificate...
	I1013 21:55:10.425982  168487 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem
	I1013 21:55:10.426003  168487 main.go:141] libmachine: Decoding PEM data...
	I1013 21:55:10.426021  168487 main.go:141] libmachine: Parsing certificate...
	I1013 21:55:10.426376  168487 cli_runner.go:164] Run: docker network inspect force-systemd-env-312094 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1013 21:55:10.442347  168487 cli_runner.go:211] docker network inspect force-systemd-env-312094 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1013 21:55:10.442437  168487 network_create.go:284] running [docker network inspect force-systemd-env-312094] to gather additional debugging logs...
	I1013 21:55:10.442459  168487 cli_runner.go:164] Run: docker network inspect force-systemd-env-312094
	W1013 21:55:10.458174  168487 cli_runner.go:211] docker network inspect force-systemd-env-312094 returned with exit code 1
	I1013 21:55:10.458206  168487 network_create.go:287] error running [docker network inspect force-systemd-env-312094]: docker network inspect force-systemd-env-312094: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-312094 not found
	I1013 21:55:10.458220  168487 network_create.go:289] output of [docker network inspect force-systemd-env-312094]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-312094 not found
	
	** /stderr **
	I1013 21:55:10.458322  168487 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 21:55:10.474220  168487 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-95647f6063f5 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:2e:3d:b3:ce:26:60} reservation:<nil>}
	I1013 21:55:10.474548  168487 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-524c3512c6b6 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:06:88:a1:02:e0:8e} reservation:<nil>}
	I1013 21:55:10.474888  168487 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-2d17b8b5c002 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:aa:ca:29:7e:1f:a0} reservation:<nil>}
	I1013 21:55:10.475091  168487 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-fba820387dad IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ae:f6:b2:86:7d:91} reservation:<nil>}
	I1013 21:55:10.475504  168487 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019d9960}
	I1013 21:55:10.475533  168487 network_create.go:124] attempt to create docker network force-systemd-env-312094 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1013 21:55:10.475599  168487 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-312094 force-systemd-env-312094
	I1013 21:55:10.533854  168487 network_create.go:108] docker network force-systemd-env-312094 192.168.85.0/24 created
	I1013 21:55:10.533888  168487 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-env-312094" container
	I1013 21:55:10.533964  168487 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1013 21:55:10.550637  168487 cli_runner.go:164] Run: docker volume create force-systemd-env-312094 --label name.minikube.sigs.k8s.io=force-systemd-env-312094 --label created_by.minikube.sigs.k8s.io=true
	I1013 21:55:10.568548  168487 oci.go:103] Successfully created a docker volume force-systemd-env-312094
	I1013 21:55:10.568640  168487 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-312094-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-312094 --entrypoint /usr/bin/test -v force-systemd-env-312094:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1013 21:55:11.080567  168487 oci.go:107] Successfully prepared a docker volume force-systemd-env-312094
	I1013 21:55:11.080623  168487 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 21:55:11.080643  168487 kic.go:194] Starting extracting preloaded images to volume ...
	I1013 21:55:11.080711  168487 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-2495/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-312094:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1013 21:55:15.510059  168487 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-2495/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-312094:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.429305399s)
	I1013 21:55:15.510106  168487 kic.go:203] duration metric: took 4.429452071s to extract preloaded images to volume ...
	W1013 21:55:15.510255  168487 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1013 21:55:15.510403  168487 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1013 21:55:15.563811  168487 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-312094 --name force-systemd-env-312094 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-312094 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-312094 --network force-systemd-env-312094 --ip 192.168.85.2 --volume force-systemd-env-312094:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1013 21:55:15.855559  168487 cli_runner.go:164] Run: docker container inspect force-systemd-env-312094 --format={{.State.Running}}
	I1013 21:55:15.877170  168487 cli_runner.go:164] Run: docker container inspect force-systemd-env-312094 --format={{.State.Status}}
	I1013 21:55:15.899839  168487 cli_runner.go:164] Run: docker exec force-systemd-env-312094 stat /var/lib/dpkg/alternatives/iptables
	I1013 21:55:15.954596  168487 oci.go:144] the created container "force-systemd-env-312094" has a running status.
	I1013 21:55:15.954639  168487 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21724-2495/.minikube/machines/force-systemd-env-312094/id_rsa...
	I1013 21:55:17.334455  168487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21724-2495/.minikube/machines/force-systemd-env-312094/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1013 21:55:17.334506  168487 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21724-2495/.minikube/machines/force-systemd-env-312094/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1013 21:55:17.373427  168487 cli_runner.go:164] Run: docker container inspect force-systemd-env-312094 --format={{.State.Status}}
	I1013 21:55:17.396068  168487 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1013 21:55:17.396090  168487 kic_runner.go:114] Args: [docker exec --privileged force-systemd-env-312094 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1013 21:55:17.446457  168487 cli_runner.go:164] Run: docker container inspect force-systemd-env-312094 --format={{.State.Status}}
	I1013 21:55:17.466451  168487 machine.go:93] provisionDockerMachine start ...
	I1013 21:55:17.466548  168487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-312094
	I1013 21:55:17.493366  168487 main.go:141] libmachine: Using SSH client type: native
	I1013 21:55:17.493714  168487 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33036 <nil> <nil>}
	I1013 21:55:17.493731  168487 main.go:141] libmachine: About to run SSH command:
	hostname
	I1013 21:55:17.707388  168487 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-env-312094
	
	I1013 21:55:17.707452  168487 ubuntu.go:182] provisioning hostname "force-systemd-env-312094"
	I1013 21:55:17.707547  168487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-312094
	I1013 21:55:17.730811  168487 main.go:141] libmachine: Using SSH client type: native
	I1013 21:55:17.731107  168487 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33036 <nil> <nil>}
	I1013 21:55:17.731124  168487 main.go:141] libmachine: About to run SSH command:
	sudo hostname force-systemd-env-312094 && echo "force-systemd-env-312094" | sudo tee /etc/hostname
	I1013 21:55:17.895908  168487 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-env-312094
	
	I1013 21:55:17.895983  168487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-312094
	I1013 21:55:17.917366  168487 main.go:141] libmachine: Using SSH client type: native
	I1013 21:55:17.917679  168487 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33036 <nil> <nil>}
	I1013 21:55:17.917704  168487 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-env-312094' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-312094/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-env-312094' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 21:55:18.072426  168487 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1013 21:55:18.072454  168487 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-2495/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-2495/.minikube}
	I1013 21:55:18.072479  168487 ubuntu.go:190] setting up certificates
	I1013 21:55:18.072494  168487 provision.go:84] configureAuth start
	I1013 21:55:18.072555  168487 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-312094
	I1013 21:55:18.089826  168487 provision.go:143] copyHostCerts
	I1013 21:55:18.089877  168487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21724-2495/.minikube/ca.pem
	I1013 21:55:18.089913  168487 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-2495/.minikube/ca.pem, removing ...
	I1013 21:55:18.089924  168487 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-2495/.minikube/ca.pem
	I1013 21:55:18.090000  168487 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-2495/.minikube/ca.pem (1082 bytes)
	I1013 21:55:18.090084  168487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21724-2495/.minikube/cert.pem
	I1013 21:55:18.090105  168487 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-2495/.minikube/cert.pem, removing ...
	I1013 21:55:18.090110  168487 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-2495/.minikube/cert.pem
	I1013 21:55:18.090140  168487 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-2495/.minikube/cert.pem (1123 bytes)
	I1013 21:55:18.090193  168487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21724-2495/.minikube/key.pem
	I1013 21:55:18.090216  168487 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-2495/.minikube/key.pem, removing ...
	I1013 21:55:18.090224  168487 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-2495/.minikube/key.pem
	I1013 21:55:18.090249  168487 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-2495/.minikube/key.pem (1675 bytes)
	I1013 21:55:18.090300  168487 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-2495/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca-key.pem org=jenkins.force-systemd-env-312094 san=[127.0.0.1 192.168.85.2 force-systemd-env-312094 localhost minikube]
	I1013 21:55:18.535444  168487 provision.go:177] copyRemoteCerts
	I1013 21:55:18.535507  168487 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 21:55:18.535548  168487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-312094
	I1013 21:55:18.553806  168487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33036 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/force-systemd-env-312094/id_rsa Username:docker}
	I1013 21:55:18.656574  168487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1013 21:55:18.656644  168487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1013 21:55:18.680171  168487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21724-2495/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1013 21:55:18.680237  168487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1013 21:55:18.701252  168487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21724-2495/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1013 21:55:18.701316  168487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1013 21:55:18.719004  168487 provision.go:87] duration metric: took 646.495422ms to configureAuth
	I1013 21:55:18.719028  168487 ubuntu.go:206] setting minikube options for container-runtime
	I1013 21:55:18.719206  168487 config.go:182] Loaded profile config "force-systemd-env-312094": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:55:18.719301  168487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-312094
	I1013 21:55:18.735675  168487 main.go:141] libmachine: Using SSH client type: native
	I1013 21:55:18.736019  168487 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33036 <nil> <nil>}
	I1013 21:55:18.736041  168487 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1013 21:55:18.990175  168487 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1013 21:55:18.990200  168487 machine.go:96] duration metric: took 1.52372643s to provisionDockerMachine
	I1013 21:55:18.990211  168487 client.go:171] duration metric: took 8.564390458s to LocalClient.Create
	I1013 21:55:18.990225  168487 start.go:167] duration metric: took 8.564457779s to libmachine.API.Create "force-systemd-env-312094"
	I1013 21:55:18.990243  168487 start.go:293] postStartSetup for "force-systemd-env-312094" (driver="docker")
	I1013 21:55:18.990258  168487 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 21:55:18.990332  168487 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 21:55:18.990378  168487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-312094
	I1013 21:55:19.008768  168487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33036 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/force-systemd-env-312094/id_rsa Username:docker}
	I1013 21:55:19.111223  168487 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 21:55:19.114106  168487 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1013 21:55:19.114134  168487 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1013 21:55:19.114145  168487 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-2495/.minikube/addons for local assets ...
	I1013 21:55:19.114197  168487 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-2495/.minikube/files for local assets ...
	I1013 21:55:19.114280  168487 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem -> 42992.pem in /etc/ssl/certs
	I1013 21:55:19.114287  168487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem -> /etc/ssl/certs/42992.pem
	I1013 21:55:19.114393  168487 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1013 21:55:19.121274  168487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem --> /etc/ssl/certs/42992.pem (1708 bytes)
	I1013 21:55:19.137834  168487 start.go:296] duration metric: took 147.572273ms for postStartSetup
	I1013 21:55:19.138172  168487 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-312094
	I1013 21:55:19.154438  168487 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-env-312094/config.json ...
	I1013 21:55:19.154711  168487 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 21:55:19.154759  168487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-312094
	I1013 21:55:19.171889  168487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33036 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/force-systemd-env-312094/id_rsa Username:docker}
	I1013 21:55:19.268339  168487 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1013 21:55:19.272674  168487 start.go:128] duration metric: took 8.850384307s to createHost
	I1013 21:55:19.272695  168487 start.go:83] releasing machines lock for "force-systemd-env-312094", held for 8.850500693s
	I1013 21:55:19.272761  168487 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-312094
	I1013 21:55:19.288112  168487 ssh_runner.go:195] Run: cat /version.json
	I1013 21:55:19.288172  168487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-312094
	I1013 21:55:19.288414  168487 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 21:55:19.288474  168487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-312094
	I1013 21:55:19.304879  168487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33036 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/force-systemd-env-312094/id_rsa Username:docker}
	I1013 21:55:19.321736  168487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33036 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/force-systemd-env-312094/id_rsa Username:docker}
	I1013 21:55:19.411354  168487 ssh_runner.go:195] Run: systemctl --version
	I1013 21:55:19.500125  168487 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1013 21:55:19.537162  168487 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 21:55:19.541424  168487 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 21:55:19.541494  168487 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 21:55:19.569736  168487 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1013 21:55:19.569760  168487 start.go:495] detecting cgroup driver to use...
	I1013 21:55:19.569777  168487 start.go:499] using "systemd" cgroup driver as enforced via flags
	I1013 21:55:19.569827  168487 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1013 21:55:19.586433  168487 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1013 21:55:19.598863  168487 docker.go:218] disabling cri-docker service (if available) ...
	I1013 21:55:19.598920  168487 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 21:55:19.616249  168487 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 21:55:19.634437  168487 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 21:55:19.750596  168487 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 21:55:19.880610  168487 docker.go:234] disabling docker service ...
	I1013 21:55:19.880687  168487 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 21:55:19.901476  168487 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 21:55:19.914595  168487 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 21:55:20.036891  168487 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 21:55:20.167553  168487 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1013 21:55:20.181171  168487 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 21:55:20.195537  168487 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1013 21:55:20.195682  168487 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 21:55:20.205227  168487 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1013 21:55:20.205297  168487 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 21:55:20.215289  168487 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 21:55:20.224608  168487 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 21:55:20.233077  168487 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 21:55:20.242062  168487 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 21:55:20.250878  168487 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 21:55:20.266141  168487 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 21:55:20.274870  168487 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 21:55:20.282244  168487 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1013 21:55:20.289533  168487 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 21:55:20.412055  168487 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1013 21:55:20.540958  168487 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1013 21:55:20.541022  168487 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1013 21:55:20.544647  168487 start.go:563] Will wait 60s for crictl version
	I1013 21:55:20.544704  168487 ssh_runner.go:195] Run: which crictl
	I1013 21:55:20.548316  168487 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1013 21:55:20.576786  168487 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1013 21:55:20.576865  168487 ssh_runner.go:195] Run: crio --version
	I1013 21:55:20.607042  168487 ssh_runner.go:195] Run: crio --version
	I1013 21:55:20.639889  168487 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1013 21:55:20.642688  168487 cli_runner.go:164] Run: docker network inspect force-systemd-env-312094 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 21:55:20.658309  168487 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1013 21:55:20.661976  168487 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 21:55:20.671392  168487 kubeadm.go:883] updating cluster {Name:force-systemd-env-312094 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-312094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 21:55:20.671519  168487 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 21:55:20.671578  168487 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 21:55:20.702794  168487 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 21:55:20.702820  168487 crio.go:433] Images already preloaded, skipping extraction
	I1013 21:55:20.702874  168487 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 21:55:20.727157  168487 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 21:55:20.727185  168487 cache_images.go:85] Images are preloaded, skipping loading
	I1013 21:55:20.727200  168487 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1013 21:55:20.727289  168487 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=force-systemd-env-312094 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-312094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1013 21:55:20.727370  168487 ssh_runner.go:195] Run: crio config
	I1013 21:55:20.787960  168487 cni.go:84] Creating CNI manager for ""
	I1013 21:55:20.787984  168487 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 21:55:20.788005  168487 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1013 21:55:20.788051  168487 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-env-312094 NodeName:force-systemd-env-312094 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 21:55:20.788195  168487 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-env-312094"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1013 21:55:20.788277  168487 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1013 21:55:20.795978  168487 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 21:55:20.796046  168487 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 21:55:20.803605  168487 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1013 21:55:20.816663  168487 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 21:55:20.830640  168487 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
	I1013 21:55:20.844008  168487 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1013 21:55:20.847684  168487 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 21:55:20.857948  168487 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 21:55:20.965290  168487 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 21:55:20.984599  168487 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-env-312094 for IP: 192.168.85.2
	I1013 21:55:20.984617  168487 certs.go:195] generating shared ca certs ...
	I1013 21:55:20.984633  168487 certs.go:227] acquiring lock for ca certs: {Name:mk2386d3847709c1fe7ff4ab092e2e3fd8551167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:55:20.984766  168487 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-2495/.minikube/ca.key
	I1013 21:55:20.984814  168487 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.key
	I1013 21:55:20.984825  168487 certs.go:257] generating profile certs ...
	I1013 21:55:20.984876  168487 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-env-312094/client.key
	I1013 21:55:20.984900  168487 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-env-312094/client.crt with IP's: []
	I1013 21:55:21.665898  168487 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-env-312094/client.crt ...
	I1013 21:55:21.665931  168487 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-env-312094/client.crt: {Name:mkb645617bc0016a6d27b316d0713d42f424694d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:55:21.666132  168487 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-env-312094/client.key ...
	I1013 21:55:21.666149  168487 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-env-312094/client.key: {Name:mka4844cb4446e07dcd41b696998746d1340a9f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:55:21.666242  168487 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-env-312094/apiserver.key.a94f2806
	I1013 21:55:21.666260  168487 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-env-312094/apiserver.crt.a94f2806 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1013 21:55:22.240537  168487 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-env-312094/apiserver.crt.a94f2806 ...
	I1013 21:55:22.240571  168487 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-env-312094/apiserver.crt.a94f2806: {Name:mk14f1f35b6766a12e05b4367d669736a1a562f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:55:22.240759  168487 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-env-312094/apiserver.key.a94f2806 ...
	I1013 21:55:22.240774  168487 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-env-312094/apiserver.key.a94f2806: {Name:mkc0d8a71e3811cfd845189e686f430ba8cf2306 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:55:22.240865  168487 certs.go:382] copying /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-env-312094/apiserver.crt.a94f2806 -> /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-env-312094/apiserver.crt
	I1013 21:55:22.240948  168487 certs.go:386] copying /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-env-312094/apiserver.key.a94f2806 -> /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-env-312094/apiserver.key
	I1013 21:55:22.241011  168487 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-env-312094/proxy-client.key
	I1013 21:55:22.241031  168487 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-env-312094/proxy-client.crt with IP's: []
	I1013 21:55:22.573226  168487 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-env-312094/proxy-client.crt ...
	I1013 21:55:22.573257  168487 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-env-312094/proxy-client.crt: {Name:mk2d4a7cdac452f8079cc4adb92ad7a1987056ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:55:22.573437  168487 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-env-312094/proxy-client.key ...
	I1013 21:55:22.573451  168487 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-env-312094/proxy-client.key: {Name:mkb7eaa24982b101bec8ec39be6dad93e7dd60f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:55:22.573531  168487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21724-2495/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1013 21:55:22.573555  168487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21724-2495/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1013 21:55:22.573568  168487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1013 21:55:22.573585  168487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1013 21:55:22.573605  168487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-env-312094/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1013 21:55:22.573621  168487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-env-312094/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1013 21:55:22.573635  168487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-env-312094/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1013 21:55:22.573650  168487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-env-312094/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1013 21:55:22.573706  168487 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/4299.pem (1338 bytes)
	W1013 21:55:22.573744  168487 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-2495/.minikube/certs/4299_empty.pem, impossibly tiny 0 bytes
	I1013 21:55:22.573756  168487 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca-key.pem (1679 bytes)
	I1013 21:55:22.573789  168487 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem (1082 bytes)
	I1013 21:55:22.573816  168487 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem (1123 bytes)
	I1013 21:55:22.573841  168487 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/key.pem (1675 bytes)
	I1013 21:55:22.573886  168487 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem (1708 bytes)
	I1013 21:55:22.573919  168487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21724-2495/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1013 21:55:22.573934  168487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/4299.pem -> /usr/share/ca-certificates/4299.pem
	I1013 21:55:22.573945  168487 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem -> /usr/share/ca-certificates/42992.pem
	I1013 21:55:22.574567  168487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 21:55:22.595405  168487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1013 21:55:22.614271  168487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 21:55:22.632885  168487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1013 21:55:22.652974  168487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-env-312094/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1013 21:55:22.671598  168487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-env-312094/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1013 21:55:22.688985  168487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-env-312094/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 21:55:22.706468  168487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/force-systemd-env-312094/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1013 21:55:22.723108  168487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 21:55:22.740830  168487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/certs/4299.pem --> /usr/share/ca-certificates/4299.pem (1338 bytes)
	I1013 21:55:22.758417  168487 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem --> /usr/share/ca-certificates/42992.pem (1708 bytes)
	I1013 21:55:22.775402  168487 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 21:55:22.788312  168487 ssh_runner.go:195] Run: openssl version
	I1013 21:55:22.794343  168487 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 21:55:22.802875  168487 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 21:55:22.806446  168487 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 20:59 /usr/share/ca-certificates/minikubeCA.pem
	I1013 21:55:22.806547  168487 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 21:55:22.847299  168487 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1013 21:55:22.855430  168487 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4299.pem && ln -fs /usr/share/ca-certificates/4299.pem /etc/ssl/certs/4299.pem"
	I1013 21:55:22.864056  168487 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4299.pem
	I1013 21:55:22.867501  168487 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 13 21:06 /usr/share/ca-certificates/4299.pem
	I1013 21:55:22.867560  168487 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4299.pem
	I1013 21:55:22.908801  168487 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4299.pem /etc/ssl/certs/51391683.0"
	I1013 21:55:22.917162  168487 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/42992.pem && ln -fs /usr/share/ca-certificates/42992.pem /etc/ssl/certs/42992.pem"
	I1013 21:55:22.925356  168487 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42992.pem
	I1013 21:55:22.929014  168487 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 13 21:06 /usr/share/ca-certificates/42992.pem
	I1013 21:55:22.929085  168487 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42992.pem
	I1013 21:55:22.970421  168487 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/42992.pem /etc/ssl/certs/3ec20f2e.0"
	I1013 21:55:22.978486  168487 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 21:55:22.981968  168487 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1013 21:55:22.982021  168487 kubeadm.go:400] StartCluster: {Name:force-systemd-env-312094 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-312094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 21:55:22.982093  168487 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 21:55:22.982156  168487 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 21:55:23.009697  168487 cri.go:89] found id: ""
	I1013 21:55:23.009822  168487 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1013 21:55:23.017771  168487 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1013 21:55:23.025340  168487 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1013 21:55:23.025441  168487 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1013 21:55:23.032780  168487 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1013 21:55:23.032801  168487 kubeadm.go:157] found existing configuration files:
	
	I1013 21:55:23.032851  168487 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1013 21:55:23.040270  168487 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1013 21:55:23.040335  168487 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1013 21:55:23.047355  168487 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1013 21:55:23.054596  168487 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1013 21:55:23.054655  168487 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1013 21:55:23.061807  168487 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1013 21:55:23.069010  168487 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1013 21:55:23.069081  168487 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1013 21:55:23.076102  168487 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1013 21:55:23.083321  168487 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1013 21:55:23.083400  168487 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1013 21:55:23.090670  168487 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1013 21:55:23.132512  168487 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1013 21:55:23.132939  168487 kubeadm.go:318] [preflight] Running pre-flight checks
	I1013 21:55:23.174136  168487 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1013 21:55:23.174223  168487 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1013 21:55:23.174265  168487 kubeadm.go:318] OS: Linux
	I1013 21:55:23.174320  168487 kubeadm.go:318] CGROUPS_CPU: enabled
	I1013 21:55:23.174379  168487 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1013 21:55:23.174436  168487 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1013 21:55:23.174494  168487 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1013 21:55:23.174554  168487 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1013 21:55:23.174609  168487 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1013 21:55:23.174663  168487 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1013 21:55:23.174728  168487 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1013 21:55:23.174784  168487 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1013 21:55:23.249030  168487 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1013 21:55:23.249165  168487 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1013 21:55:23.249277  168487 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1013 21:55:23.258872  168487 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1013 21:55:23.265396  168487 out.go:252]   - Generating certificates and keys ...
	I1013 21:55:23.265509  168487 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1013 21:55:23.265582  168487 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1013 21:55:24.478718  168487 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1013 21:55:25.020141  168487 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1013 21:55:25.948085  168487 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1013 21:55:26.319019  168487 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1013 21:55:26.604991  168487 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1013 21:55:26.605338  168487 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [force-systemd-env-312094 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1013 21:55:26.969810  168487 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1013 21:55:26.970083  168487 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-312094 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1013 21:55:27.410716  168487 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1013 21:55:28.474769  168487 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1013 21:55:29.088807  168487 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1013 21:55:29.088887  168487 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1013 21:55:29.538878  168487 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1013 21:55:29.892715  168487 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1013 21:55:31.051218  168487 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1013 21:55:31.998830  168487 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1013 21:55:32.588068  168487 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1013 21:55:32.589064  168487 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1013 21:55:32.593202  168487 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1013 21:55:32.596965  168487 out.go:252]   - Booting up control plane ...
	I1013 21:55:32.597071  168487 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1013 21:55:32.597148  168487 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1013 21:55:32.597214  168487 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1013 21:55:32.611996  168487 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1013 21:55:32.612117  168487 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1013 21:55:32.619857  168487 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1013 21:55:32.619968  168487 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1013 21:55:32.620013  168487 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1013 21:55:32.746484  168487 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1013 21:55:32.746614  168487 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1013 21:55:33.747865  168487 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001671261s
	I1013 21:55:33.751195  168487 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1013 21:55:33.751490  168487 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1013 21:55:33.751585  168487 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1013 21:55:33.751894  168487 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1013 21:59:33.751476  168487 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000135873s
	I1013 21:59:33.752432  168487 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.00099327s
	I1013 21:59:33.752806  168487 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001029388s
	I1013 21:59:33.752963  168487 kubeadm.go:318] 
	I1013 21:59:33.753074  168487 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1013 21:59:33.753163  168487 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1013 21:59:33.753268  168487 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1013 21:59:33.753528  168487 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1013 21:59:33.753612  168487 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1013 21:59:33.753693  168487 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1013 21:59:33.753697  168487 kubeadm.go:318] 
	I1013 21:59:33.757807  168487 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1013 21:59:33.758058  168487 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1013 21:59:33.758174  168487 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1013 21:59:33.758800  168487 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.85.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.85.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1013 21:59:33.758875  168487 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	W1013 21:59:33.759014  168487 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-env-312094 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-312094 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001671261s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000135873s
	[control-plane-check] kube-apiserver is not healthy after 4m0.00099327s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001029388s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.85.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.85.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-env-312094 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-312094 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001671261s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000135873s
	[control-plane-check] kube-apiserver is not healthy after 4m0.00099327s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001029388s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.85.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.85.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1013 21:59:33.759102  168487 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1013 21:59:34.303465  168487 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 21:59:34.317170  168487 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1013 21:59:34.317238  168487 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1013 21:59:34.324957  168487 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1013 21:59:34.324977  168487 kubeadm.go:157] found existing configuration files:
	
	I1013 21:59:34.325025  168487 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1013 21:59:34.332570  168487 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1013 21:59:34.332630  168487 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1013 21:59:34.339699  168487 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1013 21:59:34.347238  168487 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1013 21:59:34.347304  168487 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1013 21:59:34.354371  168487 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1013 21:59:34.362436  168487 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1013 21:59:34.362553  168487 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1013 21:59:34.369962  168487 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1013 21:59:34.377605  168487 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1013 21:59:34.377675  168487 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1013 21:59:34.384969  168487 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1013 21:59:34.422161  168487 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1013 21:59:34.422218  168487 kubeadm.go:318] [preflight] Running pre-flight checks
	I1013 21:59:34.444299  168487 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1013 21:59:34.444390  168487 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1013 21:59:34.444428  168487 kubeadm.go:318] OS: Linux
	I1013 21:59:34.444475  168487 kubeadm.go:318] CGROUPS_CPU: enabled
	I1013 21:59:34.444526  168487 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1013 21:59:34.444575  168487 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1013 21:59:34.444625  168487 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1013 21:59:34.444675  168487 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1013 21:59:34.444725  168487 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1013 21:59:34.444775  168487 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1013 21:59:34.444826  168487 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1013 21:59:34.444873  168487 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1013 21:59:34.511091  168487 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1013 21:59:34.511206  168487 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1013 21:59:34.511301  168487 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1013 21:59:34.521630  168487 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1013 21:59:34.528626  168487 out.go:252]   - Generating certificates and keys ...
	I1013 21:59:34.528750  168487 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1013 21:59:34.528839  168487 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1013 21:59:34.528931  168487 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1013 21:59:34.529006  168487 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1013 21:59:34.529115  168487 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1013 21:59:34.529185  168487 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1013 21:59:34.529262  168487 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1013 21:59:34.529339  168487 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1013 21:59:34.529830  168487 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1013 21:59:34.530224  168487 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1013 21:59:34.530530  168487 kubeadm.go:318] [certs] Using the existing "sa" key
	I1013 21:59:34.530671  168487 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1013 21:59:34.906021  168487 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1013 21:59:35.275968  168487 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1013 21:59:35.539035  168487 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1013 21:59:36.429993  168487 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1013 21:59:36.649437  168487 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1013 21:59:36.650188  168487 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1013 21:59:36.652886  168487 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1013 21:59:36.656085  168487 out.go:252]   - Booting up control plane ...
	I1013 21:59:36.656194  168487 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1013 21:59:36.656286  168487 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1013 21:59:36.657293  168487 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1013 21:59:36.675918  168487 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1013 21:59:36.676047  168487 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1013 21:59:36.682835  168487 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1013 21:59:36.683145  168487 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1013 21:59:36.683350  168487 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1013 21:59:36.844078  168487 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1013 21:59:36.844220  168487 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1013 21:59:37.364816  168487 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 521.206463ms
	I1013 21:59:37.368814  168487 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1013 21:59:37.368929  168487 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1013 21:59:37.369033  168487 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1013 21:59:37.369141  168487 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1013 22:03:37.369801  168487 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.00002897s
	I1013 22:03:37.369904  168487 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000778727s
	I1013 22:03:37.370116  168487 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001248647s
	I1013 22:03:37.370131  168487 kubeadm.go:318] 
	I1013 22:03:37.370226  168487 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1013 22:03:37.370317  168487 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1013 22:03:37.370410  168487 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1013 22:03:37.370626  168487 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1013 22:03:37.370738  168487 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1013 22:03:37.370858  168487 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1013 22:03:37.370869  168487 kubeadm.go:318] 
	I1013 22:03:37.375493  168487 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1013 22:03:37.375755  168487 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1013 22:03:37.375913  168487 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1013 22:03:37.376508  168487 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.85.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1013 22:03:37.376604  168487 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1013 22:03:37.376716  168487 kubeadm.go:402] duration metric: took 8m14.394697621s to StartCluster
	I1013 22:03:37.376750  168487 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1013 22:03:37.376810  168487 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1013 22:03:37.400872  168487 cri.go:89] found id: ""
	I1013 22:03:37.400909  168487 logs.go:282] 0 containers: []
	W1013 22:03:37.400918  168487 logs.go:284] No container was found matching "kube-apiserver"
	I1013 22:03:37.400925  168487 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1013 22:03:37.400990  168487 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1013 22:03:37.425598  168487 cri.go:89] found id: ""
	I1013 22:03:37.425622  168487 logs.go:282] 0 containers: []
	W1013 22:03:37.425640  168487 logs.go:284] No container was found matching "etcd"
	I1013 22:03:37.425647  168487 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1013 22:03:37.425707  168487 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1013 22:03:37.454447  168487 cri.go:89] found id: ""
	I1013 22:03:37.454473  168487 logs.go:282] 0 containers: []
	W1013 22:03:37.454481  168487 logs.go:284] No container was found matching "coredns"
	I1013 22:03:37.454487  168487 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1013 22:03:37.454555  168487 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1013 22:03:37.479272  168487 cri.go:89] found id: ""
	I1013 22:03:37.479298  168487 logs.go:282] 0 containers: []
	W1013 22:03:37.479307  168487 logs.go:284] No container was found matching "kube-scheduler"
	I1013 22:03:37.479314  168487 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1013 22:03:37.479369  168487 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1013 22:03:37.506114  168487 cri.go:89] found id: ""
	I1013 22:03:37.506137  168487 logs.go:282] 0 containers: []
	W1013 22:03:37.506146  168487 logs.go:284] No container was found matching "kube-proxy"
	I1013 22:03:37.506152  168487 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1013 22:03:37.506230  168487 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1013 22:03:37.530808  168487 cri.go:89] found id: ""
	I1013 22:03:37.530843  168487 logs.go:282] 0 containers: []
	W1013 22:03:37.530852  168487 logs.go:284] No container was found matching "kube-controller-manager"
	I1013 22:03:37.530860  168487 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1013 22:03:37.530918  168487 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1013 22:03:37.562038  168487 cri.go:89] found id: ""
	I1013 22:03:37.562059  168487 logs.go:282] 0 containers: []
	W1013 22:03:37.562067  168487 logs.go:284] No container was found matching "kindnet"
	I1013 22:03:37.562076  168487 logs.go:123] Gathering logs for kubelet ...
	I1013 22:03:37.562087  168487 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1013 22:03:37.654862  168487 logs.go:123] Gathering logs for dmesg ...
	I1013 22:03:37.654896  168487 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1013 22:03:37.670117  168487 logs.go:123] Gathering logs for describe nodes ...
	I1013 22:03:37.670142  168487 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1013 22:03:37.739064  168487 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1013 22:03:37.730977    2371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1013 22:03:37.731668    2371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1013 22:03:37.733236    2371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1013 22:03:37.733690    2371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1013 22:03:37.735118    2371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1013 22:03:37.730977    2371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1013 22:03:37.731668    2371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1013 22:03:37.733236    2371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1013 22:03:37.733690    2371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1013 22:03:37.735118    2371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1013 22:03:37.739086  168487 logs.go:123] Gathering logs for CRI-O ...
	I1013 22:03:37.739099  168487 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1013 22:03:37.813093  168487 logs.go:123] Gathering logs for container status ...
	I1013 22:03:37.813127  168487 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1013 22:03:37.842469  168487 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 521.206463ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.00002897s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000778727s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001248647s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.85.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1013 22:03:37.842524  168487 out.go:285] * 
	* 
	W1013 22:03:37.842578  168487 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 521.206463ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.00002897s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000778727s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001248647s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.85.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 521.206463ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.00002897s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000778727s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001248647s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.85.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1013 22:03:37.842597  168487 out.go:285] * 
	* 
	W1013 22:03:37.844753  168487 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1013 22:03:37.850676  168487 out.go:203] 
	W1013 22:03:37.854421  168487 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 521.206463ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.00002897s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000778727s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001248647s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.85.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 521.206463ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.00002897s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000778727s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001248647s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.85.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1013 22:03:37.854450  168487 out.go:285] * 
	* 
	I1013 22:03:37.857625  168487 out.go:203] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-linux-arm64 start -p force-systemd-env-312094 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio" : exit status 80
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2025-10-13 22:03:37.913367229 +0000 UTC m=+3942.748196407
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestForceSystemdEnv]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestForceSystemdEnv]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect force-systemd-env-312094
helpers_test.go:243: (dbg) docker inspect force-systemd-env-312094:

-- stdout --
	[
	    {
	        "Id": "aeee5492ffe64708f7fee9f548fc04f4749a7446f6926dad5ed97c2d450d9a06",
	        "Created": "2025-10-13T21:55:15.578330158Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 168903,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-13T21:55:15.64197355Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/aeee5492ffe64708f7fee9f548fc04f4749a7446f6926dad5ed97c2d450d9a06/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/aeee5492ffe64708f7fee9f548fc04f4749a7446f6926dad5ed97c2d450d9a06/hostname",
	        "HostsPath": "/var/lib/docker/containers/aeee5492ffe64708f7fee9f548fc04f4749a7446f6926dad5ed97c2d450d9a06/hosts",
	        "LogPath": "/var/lib/docker/containers/aeee5492ffe64708f7fee9f548fc04f4749a7446f6926dad5ed97c2d450d9a06/aeee5492ffe64708f7fee9f548fc04f4749a7446f6926dad5ed97c2d450d9a06-json.log",
	        "Name": "/force-systemd-env-312094",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "force-systemd-env-312094:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "force-systemd-env-312094",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "aeee5492ffe64708f7fee9f548fc04f4749a7446f6926dad5ed97c2d450d9a06",
	                "LowerDir": "/var/lib/docker/overlay2/d093888b6dd2bc853641b215065e895c428da1d83d8e99d2847e4bd52338051c-init/diff:/var/lib/docker/overlay2/160e637be34680c755ffc6109c686a7fe5c4dfcba06c9274b3806724dc064518/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d093888b6dd2bc853641b215065e895c428da1d83d8e99d2847e4bd52338051c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d093888b6dd2bc853641b215065e895c428da1d83d8e99d2847e4bd52338051c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d093888b6dd2bc853641b215065e895c428da1d83d8e99d2847e4bd52338051c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "force-systemd-env-312094",
	                "Source": "/var/lib/docker/volumes/force-systemd-env-312094/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "force-systemd-env-312094",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "force-systemd-env-312094",
	                "name.minikube.sigs.k8s.io": "force-systemd-env-312094",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "56fe69655297c4c20040c7bd284e94b194518adae6d2bde9f2f7794fb16e6a7f",
	            "SandboxKey": "/var/run/docker/netns/56fe69655297",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33036"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33037"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33040"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33038"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33039"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "force-systemd-env-312094": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fe:dc:0b:2b:5f:1e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2c2e4ce10b90944ad5c5e378bb992ac2e1bc52b1da364d335e1f65ed780e426b",
	                    "EndpointID": "89a050f33055513aecff057912573ae29b5917c08f0aab60ba83c10ccdaf6744",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "force-systemd-env-312094",
	                        "aeee5492ffe6"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-env-312094 -n force-systemd-env-312094
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-env-312094 -n force-systemd-env-312094: exit status 6 (340.17438ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1013 22:03:38.274240  175644 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-env-312094" does not appear in /home/jenkins/minikube-integration/21724-2495/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestForceSystemdEnv FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestForceSystemdEnv]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-312094 logs -n 25
helpers_test.go:260: TestForceSystemdEnv logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                    ARGS                                                    │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-122822 sudo cat /etc/kubernetes/kubelet.conf                                                     │ cilium-122822             │ jenkins │ v1.37.0 │ 13 Oct 25 21:55 UTC │                     │
	│ ssh     │ -p cilium-122822 sudo cat /var/lib/kubelet/config.yaml                                                     │ cilium-122822             │ jenkins │ v1.37.0 │ 13 Oct 25 21:55 UTC │                     │
	│ ssh     │ -p cilium-122822 sudo systemctl status docker --all --full --no-pager                                      │ cilium-122822             │ jenkins │ v1.37.0 │ 13 Oct 25 21:55 UTC │                     │
	│ ssh     │ -p cilium-122822 sudo systemctl cat docker --no-pager                                                      │ cilium-122822             │ jenkins │ v1.37.0 │ 13 Oct 25 21:55 UTC │                     │
	│ ssh     │ -p cilium-122822 sudo cat /etc/docker/daemon.json                                                          │ cilium-122822             │ jenkins │ v1.37.0 │ 13 Oct 25 21:55 UTC │                     │
	│ ssh     │ -p cilium-122822 sudo docker system info                                                                   │ cilium-122822             │ jenkins │ v1.37.0 │ 13 Oct 25 21:55 UTC │                     │
	│ ssh     │ -p cilium-122822 sudo systemctl status cri-docker --all --full --no-pager                                  │ cilium-122822             │ jenkins │ v1.37.0 │ 13 Oct 25 21:55 UTC │                     │
	│ ssh     │ -p cilium-122822 sudo systemctl cat cri-docker --no-pager                                                  │ cilium-122822             │ jenkins │ v1.37.0 │ 13 Oct 25 21:55 UTC │                     │
	│ ssh     │ -p cilium-122822 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                             │ cilium-122822             │ jenkins │ v1.37.0 │ 13 Oct 25 21:55 UTC │                     │
	│ ssh     │ -p cilium-122822 sudo cat /usr/lib/systemd/system/cri-docker.service                                       │ cilium-122822             │ jenkins │ v1.37.0 │ 13 Oct 25 21:55 UTC │                     │
	│ ssh     │ -p cilium-122822 sudo cri-dockerd --version                                                                │ cilium-122822             │ jenkins │ v1.37.0 │ 13 Oct 25 21:55 UTC │                     │
	│ ssh     │ -p cilium-122822 sudo systemctl status containerd --all --full --no-pager                                  │ cilium-122822             │ jenkins │ v1.37.0 │ 13 Oct 25 21:55 UTC │                     │
	│ ssh     │ -p cilium-122822 sudo systemctl cat containerd --no-pager                                                  │ cilium-122822             │ jenkins │ v1.37.0 │ 13 Oct 25 21:55 UTC │                     │
	│ ssh     │ -p cilium-122822 sudo cat /lib/systemd/system/containerd.service                                           │ cilium-122822             │ jenkins │ v1.37.0 │ 13 Oct 25 21:55 UTC │                     │
	│ ssh     │ -p cilium-122822 sudo cat /etc/containerd/config.toml                                                      │ cilium-122822             │ jenkins │ v1.37.0 │ 13 Oct 25 21:55 UTC │                     │
	│ ssh     │ -p cilium-122822 sudo containerd config dump                                                               │ cilium-122822             │ jenkins │ v1.37.0 │ 13 Oct 25 21:55 UTC │                     │
	│ ssh     │ -p cilium-122822 sudo systemctl status crio --all --full --no-pager                                        │ cilium-122822             │ jenkins │ v1.37.0 │ 13 Oct 25 21:55 UTC │                     │
	│ ssh     │ -p cilium-122822 sudo systemctl cat crio --no-pager                                                        │ cilium-122822             │ jenkins │ v1.37.0 │ 13 Oct 25 21:55 UTC │                     │
	│ ssh     │ -p cilium-122822 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                              │ cilium-122822             │ jenkins │ v1.37.0 │ 13 Oct 25 21:55 UTC │                     │
	│ ssh     │ -p cilium-122822 sudo crio config                                                                          │ cilium-122822             │ jenkins │ v1.37.0 │ 13 Oct 25 21:55 UTC │                     │
	│ delete  │ -p cilium-122822                                                                                           │ cilium-122822             │ jenkins │ v1.37.0 │ 13 Oct 25 21:55 UTC │ 13 Oct 25 21:55 UTC │
	│ start   │ -p force-systemd-env-312094 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ force-systemd-env-312094  │ jenkins │ v1.37.0 │ 13 Oct 25 21:55 UTC │                     │
	│ ssh     │ force-systemd-flag-257205 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                       │ force-systemd-flag-257205 │ jenkins │ v1.37.0 │ 13 Oct 25 22:02 UTC │ 13 Oct 25 22:02 UTC │
	│ delete  │ -p force-systemd-flag-257205                                                                               │ force-systemd-flag-257205 │ jenkins │ v1.37.0 │ 13 Oct 25 22:02 UTC │ 13 Oct 25 22:02 UTC │
	│ start   │ -p cert-expiration-546667 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio     │ cert-expiration-546667    │ jenkins │ v1.37.0 │ 13 Oct 25 22:02 UTC │ 13 Oct 25 22:02 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 22:02:14
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 22:02:14.790419  172955 out.go:360] Setting OutFile to fd 1 ...
	I1013 22:02:14.790523  172955 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:02:14.790527  172955 out.go:374] Setting ErrFile to fd 2...
	I1013 22:02:14.790531  172955 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:02:14.790795  172955 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-2495/.minikube/bin
	I1013 22:02:14.791213  172955 out.go:368] Setting JSON to false
	I1013 22:02:14.792188  172955 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":6269,"bootTime":1760386666,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1013 22:02:14.792246  172955 start.go:141] virtualization:  
	I1013 22:02:14.796098  172955 out.go:179] * [cert-expiration-546667] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1013 22:02:14.800705  172955 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 22:02:14.800781  172955 notify.go:220] Checking for updates...
	I1013 22:02:14.807290  172955 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 22:02:14.810454  172955 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-2495/kubeconfig
	I1013 22:02:14.813713  172955 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-2495/.minikube
	I1013 22:02:14.816910  172955 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1013 22:02:14.820154  172955 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 22:02:14.823867  172955 config.go:182] Loaded profile config "force-systemd-env-312094": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:02:14.823965  172955 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 22:02:14.853666  172955 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1013 22:02:14.853788  172955 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 22:02:14.913929  172955 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-13 22:02:14.903882892 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 22:02:14.914022  172955 docker.go:318] overlay module found
	I1013 22:02:14.919090  172955 out.go:179] * Using the docker driver based on user configuration
	I1013 22:02:14.921975  172955 start.go:305] selected driver: docker
	I1013 22:02:14.921984  172955 start.go:925] validating driver "docker" against <nil>
	I1013 22:02:14.922002  172955 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 22:02:14.922734  172955 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 22:02:14.975641  172955 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-13 22:02:14.966112623 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 22:02:14.975886  172955 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1013 22:02:14.976126  172955 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1013 22:02:14.979295  172955 out.go:179] * Using Docker driver with root privileges
	I1013 22:02:14.982184  172955 cni.go:84] Creating CNI manager for ""
	I1013 22:02:14.982243  172955 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 22:02:14.982250  172955 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1013 22:02:14.982324  172955 start.go:349] cluster config:
	{Name:cert-expiration-546667 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-546667 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loca
l ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:02:14.985493  172955 out.go:179] * Starting "cert-expiration-546667" primary control-plane node in "cert-expiration-546667" cluster
	I1013 22:02:14.988346  172955 cache.go:123] Beginning downloading kic base image for docker with crio
	I1013 22:02:14.991375  172955 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1013 22:02:14.994193  172955 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:02:14.994237  172955 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-2495/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1013 22:02:14.994253  172955 cache.go:58] Caching tarball of preloaded images
	I1013 22:02:14.994300  172955 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1013 22:02:14.994356  172955 preload.go:233] Found /home/jenkins/minikube-integration/21724-2495/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1013 22:02:14.994364  172955 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1013 22:02:14.994463  172955 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/cert-expiration-546667/config.json ...
	I1013 22:02:14.994479  172955 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/cert-expiration-546667/config.json: {Name:mk4c755acff10be744c288a0e8322bc6464ada6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:02:15.021328  172955 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1013 22:02:15.021342  172955 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1013 22:02:15.021365  172955 cache.go:232] Successfully downloaded all kic artifacts
	I1013 22:02:15.021388  172955 start.go:360] acquireMachinesLock for cert-expiration-546667: {Name:mkc1d99d19daca6e6392d545cce4b2775e99521a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 22:02:15.021505  172955 start.go:364] duration metric: took 102.069µs to acquireMachinesLock for "cert-expiration-546667"
	I1013 22:02:15.021548  172955 start.go:93] Provisioning new machine with config: &{Name:cert-expiration-546667 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-546667 Namespace:default APIServerHAVIP:
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 22:02:15.021610  172955 start.go:125] createHost starting for "" (driver="docker")
	I1013 22:02:15.025247  172955 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1013 22:02:15.025519  172955 start.go:159] libmachine.API.Create for "cert-expiration-546667" (driver="docker")
	I1013 22:02:15.025571  172955 client.go:168] LocalClient.Create starting
	I1013 22:02:15.025653  172955 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem
	I1013 22:02:15.025692  172955 main.go:141] libmachine: Decoding PEM data...
	I1013 22:02:15.025715  172955 main.go:141] libmachine: Parsing certificate...
	I1013 22:02:15.025778  172955 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem
	I1013 22:02:15.025796  172955 main.go:141] libmachine: Decoding PEM data...
	I1013 22:02:15.025805  172955 main.go:141] libmachine: Parsing certificate...
	I1013 22:02:15.026210  172955 cli_runner.go:164] Run: docker network inspect cert-expiration-546667 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1013 22:02:15.050485  172955 cli_runner.go:211] docker network inspect cert-expiration-546667 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1013 22:02:15.050566  172955 network_create.go:284] running [docker network inspect cert-expiration-546667] to gather additional debugging logs...
	I1013 22:02:15.050580  172955 cli_runner.go:164] Run: docker network inspect cert-expiration-546667
	W1013 22:02:15.066977  172955 cli_runner.go:211] docker network inspect cert-expiration-546667 returned with exit code 1
	I1013 22:02:15.066997  172955 network_create.go:287] error running [docker network inspect cert-expiration-546667]: docker network inspect cert-expiration-546667: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network cert-expiration-546667 not found
	I1013 22:02:15.067007  172955 network_create.go:289] output of [docker network inspect cert-expiration-546667]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network cert-expiration-546667 not found
	
	** /stderr **
	I1013 22:02:15.067109  172955 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 22:02:15.085295  172955 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-95647f6063f5 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:2e:3d:b3:ce:26:60} reservation:<nil>}
	I1013 22:02:15.085620  172955 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-524c3512c6b6 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:06:88:a1:02:e0:8e} reservation:<nil>}
	I1013 22:02:15.085920  172955 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-2d17b8b5c002 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:aa:ca:29:7e:1f:a0} reservation:<nil>}
	I1013 22:02:15.086325  172955 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a767d0}
	I1013 22:02:15.086339  172955 network_create.go:124] attempt to create docker network cert-expiration-546667 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1013 22:02:15.086399  172955 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cert-expiration-546667 cert-expiration-546667
	I1013 22:02:15.161377  172955 network_create.go:108] docker network cert-expiration-546667 192.168.76.0/24 created
	I1013 22:02:15.161400  172955 kic.go:121] calculated static IP "192.168.76.2" for the "cert-expiration-546667" container
	I1013 22:02:15.161474  172955 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1013 22:02:15.178327  172955 cli_runner.go:164] Run: docker volume create cert-expiration-546667 --label name.minikube.sigs.k8s.io=cert-expiration-546667 --label created_by.minikube.sigs.k8s.io=true
	I1013 22:02:15.196928  172955 oci.go:103] Successfully created a docker volume cert-expiration-546667
	I1013 22:02:15.197020  172955 cli_runner.go:164] Run: docker run --rm --name cert-expiration-546667-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-expiration-546667 --entrypoint /usr/bin/test -v cert-expiration-546667:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1013 22:02:15.707625  172955 oci.go:107] Successfully prepared a docker volume cert-expiration-546667
	I1013 22:02:15.707667  172955 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:02:15.707684  172955 kic.go:194] Starting extracting preloaded images to volume ...
	I1013 22:02:15.707747  172955 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-2495/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v cert-expiration-546667:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1013 22:02:20.054851  172955 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-2495/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v cert-expiration-546667:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.347061068s)
	I1013 22:02:20.054871  172955 kic.go:203] duration metric: took 4.347183388s to extract preloaded images to volume ...
	W1013 22:02:20.055010  172955 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1013 22:02:20.055123  172955 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1013 22:02:20.118462  172955 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cert-expiration-546667 --name cert-expiration-546667 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-expiration-546667 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cert-expiration-546667 --network cert-expiration-546667 --ip 192.168.76.2 --volume cert-expiration-546667:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1013 22:02:20.428743  172955 cli_runner.go:164] Run: docker container inspect cert-expiration-546667 --format={{.State.Running}}
	I1013 22:02:20.453569  172955 cli_runner.go:164] Run: docker container inspect cert-expiration-546667 --format={{.State.Status}}
	I1013 22:02:20.476454  172955 cli_runner.go:164] Run: docker exec cert-expiration-546667 stat /var/lib/dpkg/alternatives/iptables
	I1013 22:02:20.527696  172955 oci.go:144] the created container "cert-expiration-546667" has a running status.
	I1013 22:02:20.527714  172955 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21724-2495/.minikube/machines/cert-expiration-546667/id_rsa...
	I1013 22:02:21.054921  172955 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21724-2495/.minikube/machines/cert-expiration-546667/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1013 22:02:21.082652  172955 cli_runner.go:164] Run: docker container inspect cert-expiration-546667 --format={{.State.Status}}
	I1013 22:02:21.105713  172955 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1013 22:02:21.105724  172955 kic_runner.go:114] Args: [docker exec --privileged cert-expiration-546667 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1013 22:02:21.163641  172955 cli_runner.go:164] Run: docker container inspect cert-expiration-546667 --format={{.State.Status}}
	I1013 22:02:21.189622  172955 machine.go:93] provisionDockerMachine start ...
	I1013 22:02:21.189721  172955 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-546667
	I1013 22:02:21.214324  172955 main.go:141] libmachine: Using SSH client type: native
	I1013 22:02:21.214646  172955 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33041 <nil> <nil>}
	I1013 22:02:21.214654  172955 main.go:141] libmachine: About to run SSH command:
	hostname
	I1013 22:02:21.379332  172955 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-546667
	
	I1013 22:02:21.379345  172955 ubuntu.go:182] provisioning hostname "cert-expiration-546667"
	I1013 22:02:21.379414  172955 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-546667
	I1013 22:02:21.406161  172955 main.go:141] libmachine: Using SSH client type: native
	I1013 22:02:21.406449  172955 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33041 <nil> <nil>}
	I1013 22:02:21.406458  172955 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-expiration-546667 && echo "cert-expiration-546667" | sudo tee /etc/hostname
	I1013 22:02:21.590492  172955 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-546667
	
	I1013 22:02:21.590567  172955 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-546667
	I1013 22:02:21.613389  172955 main.go:141] libmachine: Using SSH client type: native
	I1013 22:02:21.613690  172955 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33041 <nil> <nil>}
	I1013 22:02:21.613708  172955 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-546667' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-546667/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-546667' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 22:02:21.760119  172955 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1013 22:02:21.760136  172955 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-2495/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-2495/.minikube}
	I1013 22:02:21.760165  172955 ubuntu.go:190] setting up certificates
	I1013 22:02:21.760181  172955 provision.go:84] configureAuth start
	I1013 22:02:21.760245  172955 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-546667
	I1013 22:02:21.777094  172955 provision.go:143] copyHostCerts
	I1013 22:02:21.777146  172955 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-2495/.minikube/cert.pem, removing ...
	I1013 22:02:21.777153  172955 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-2495/.minikube/cert.pem
	I1013 22:02:21.777232  172955 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-2495/.minikube/cert.pem (1123 bytes)
	I1013 22:02:21.777323  172955 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-2495/.minikube/key.pem, removing ...
	I1013 22:02:21.777326  172955 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-2495/.minikube/key.pem
	I1013 22:02:21.777350  172955 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-2495/.minikube/key.pem (1675 bytes)
	I1013 22:02:21.777398  172955 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-2495/.minikube/ca.pem, removing ...
	I1013 22:02:21.777402  172955 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-2495/.minikube/ca.pem
	I1013 22:02:21.777423  172955 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-2495/.minikube/ca.pem (1082 bytes)
	I1013 22:02:21.777467  172955 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-2495/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-546667 san=[127.0.0.1 192.168.76.2 cert-expiration-546667 localhost minikube]
	I1013 22:02:22.271985  172955 provision.go:177] copyRemoteCerts
	I1013 22:02:22.272047  172955 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 22:02:22.272085  172955 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-546667
	I1013 22:02:22.290647  172955 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33041 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/cert-expiration-546667/id_rsa Username:docker}
	I1013 22:02:22.391401  172955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1013 22:02:22.408086  172955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1013 22:02:22.425681  172955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1013 22:02:22.443205  172955 provision.go:87] duration metric: took 683.002843ms to configureAuth
	I1013 22:02:22.443222  172955 ubuntu.go:206] setting minikube options for container-runtime
	I1013 22:02:22.443402  172955 config.go:182] Loaded profile config "cert-expiration-546667": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:02:22.443494  172955 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-546667
	I1013 22:02:22.460583  172955 main.go:141] libmachine: Using SSH client type: native
	I1013 22:02:22.460884  172955 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33041 <nil> <nil>}
	I1013 22:02:22.460897  172955 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1013 22:02:22.712218  172955 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1013 22:02:22.712230  172955 machine.go:96] duration metric: took 1.522597846s to provisionDockerMachine
	I1013 22:02:22.712260  172955 client.go:171] duration metric: took 7.686662975s to LocalClient.Create
	I1013 22:02:22.712271  172955 start.go:167] duration metric: took 7.686754986s to libmachine.API.Create "cert-expiration-546667"
	I1013 22:02:22.712277  172955 start.go:293] postStartSetup for "cert-expiration-546667" (driver="docker")
	I1013 22:02:22.712287  172955 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 22:02:22.712345  172955 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 22:02:22.712382  172955 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-546667
	I1013 22:02:22.729158  172955 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33041 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/cert-expiration-546667/id_rsa Username:docker}
	I1013 22:02:22.831527  172955 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 22:02:22.834763  172955 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1013 22:02:22.834782  172955 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1013 22:02:22.834794  172955 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-2495/.minikube/addons for local assets ...
	I1013 22:02:22.834848  172955 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-2495/.minikube/files for local assets ...
	I1013 22:02:22.834920  172955 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem -> 42992.pem in /etc/ssl/certs
	I1013 22:02:22.835029  172955 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1013 22:02:22.842684  172955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem --> /etc/ssl/certs/42992.pem (1708 bytes)
	I1013 22:02:22.859620  172955 start.go:296] duration metric: took 147.330295ms for postStartSetup
	I1013 22:02:22.860071  172955 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-546667
	I1013 22:02:22.876589  172955 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/cert-expiration-546667/config.json ...
	I1013 22:02:22.876871  172955 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 22:02:22.876908  172955 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-546667
	I1013 22:02:22.893456  172955 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33041 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/cert-expiration-546667/id_rsa Username:docker}
	I1013 22:02:22.992838  172955 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1013 22:02:22.997268  172955 start.go:128] duration metric: took 7.975646304s to createHost
	I1013 22:02:22.997282  172955 start.go:83] releasing machines lock for "cert-expiration-546667", held for 7.975769656s
	I1013 22:02:22.997343  172955 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-546667
	I1013 22:02:23.014522  172955 ssh_runner.go:195] Run: cat /version.json
	I1013 22:02:23.014569  172955 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-546667
	I1013 22:02:23.014855  172955 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 22:02:23.014905  172955 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-546667
	I1013 22:02:23.037342  172955 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33041 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/cert-expiration-546667/id_rsa Username:docker}
	I1013 22:02:23.038116  172955 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33041 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/cert-expiration-546667/id_rsa Username:docker}
	I1013 22:02:23.226913  172955 ssh_runner.go:195] Run: systemctl --version
	I1013 22:02:23.233136  172955 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1013 22:02:23.268255  172955 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 22:02:23.272251  172955 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 22:02:23.272306  172955 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 22:02:23.299749  172955 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1013 22:02:23.299761  172955 start.go:495] detecting cgroup driver to use...
	I1013 22:02:23.299817  172955 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1013 22:02:23.299865  172955 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1013 22:02:23.315637  172955 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1013 22:02:23.327634  172955 docker.go:218] disabling cri-docker service (if available) ...
	I1013 22:02:23.327697  172955 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 22:02:23.345558  172955 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 22:02:23.363314  172955 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 22:02:23.481350  172955 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 22:02:23.605810  172955 docker.go:234] disabling docker service ...
	I1013 22:02:23.605862  172955 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 22:02:23.628271  172955 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 22:02:23.640731  172955 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 22:02:23.758428  172955 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 22:02:23.867836  172955 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1013 22:02:23.880945  172955 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 22:02:23.894672  172955 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1013 22:02:23.894737  172955 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:02:23.903311  172955 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1013 22:02:23.903383  172955 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:02:23.912346  172955 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:02:23.921133  172955 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:02:23.931205  172955 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 22:02:23.939052  172955 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:02:23.947450  172955 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:02:23.960520  172955 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:02:23.970285  172955 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 22:02:23.978584  172955 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1013 22:02:23.985970  172955 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:02:24.100011  172955 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1013 22:02:24.227542  172955 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1013 22:02:24.227599  172955 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1013 22:02:24.231065  172955 start.go:563] Will wait 60s for crictl version
	I1013 22:02:24.231117  172955 ssh_runner.go:195] Run: which crictl
	I1013 22:02:24.234380  172955 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1013 22:02:24.261889  172955 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1013 22:02:24.261972  172955 ssh_runner.go:195] Run: crio --version
	I1013 22:02:24.288953  172955 ssh_runner.go:195] Run: crio --version
	I1013 22:02:24.320824  172955 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1013 22:02:24.324360  172955 cli_runner.go:164] Run: docker network inspect cert-expiration-546667 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 22:02:24.340045  172955 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1013 22:02:24.343647  172955 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 22:02:24.352860  172955 kubeadm.go:883] updating cluster {Name:cert-expiration-546667 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-546667 Namespace:default APIServerHAVIP: APIServerName:mini
kubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAge
ntPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 22:02:24.352969  172955 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:02:24.353019  172955 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 22:02:24.388299  172955 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 22:02:24.388316  172955 crio.go:433] Images already preloaded, skipping extraction
	I1013 22:02:24.388373  172955 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 22:02:24.412121  172955 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 22:02:24.412132  172955 cache_images.go:85] Images are preloaded, skipping loading
	I1013 22:02:24.412139  172955 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1013 22:02:24.412227  172955 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=cert-expiration-546667 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-546667 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1013 22:02:24.412311  172955 ssh_runner.go:195] Run: crio config
	I1013 22:02:24.478737  172955 cni.go:84] Creating CNI manager for ""
	I1013 22:02:24.478747  172955 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 22:02:24.478766  172955 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1013 22:02:24.478788  172955 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-546667 NodeName:cert-expiration-546667 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 22:02:24.478903  172955 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "cert-expiration-546667"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1013 22:02:24.478971  172955 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1013 22:02:24.486469  172955 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 22:02:24.486534  172955 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 22:02:24.493784  172955 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1013 22:02:24.505891  172955 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 22:02:24.518735  172955 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
	I1013 22:02:24.531176  172955 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1013 22:02:24.534592  172955 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 22:02:24.544292  172955 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:02:24.655382  172955 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 22:02:24.671823  172955 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/cert-expiration-546667 for IP: 192.168.76.2
	I1013 22:02:24.671833  172955 certs.go:195] generating shared ca certs ...
	I1013 22:02:24.671848  172955 certs.go:227] acquiring lock for ca certs: {Name:mk2386d3847709c1fe7ff4ab092e2e3fd8551167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:02:24.671998  172955 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-2495/.minikube/ca.key
	I1013 22:02:24.672037  172955 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.key
	I1013 22:02:24.672042  172955 certs.go:257] generating profile certs ...
	I1013 22:02:24.672092  172955 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/cert-expiration-546667/client.key
	I1013 22:02:24.672109  172955 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/cert-expiration-546667/client.crt with IP's: []
	I1013 22:02:25.366290  172955 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/cert-expiration-546667/client.crt ...
	I1013 22:02:25.366308  172955 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/cert-expiration-546667/client.crt: {Name:mk8bad07a32edcadd67a532bbfc043528da96bfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:02:25.366515  172955 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/cert-expiration-546667/client.key ...
	I1013 22:02:25.366523  172955 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/cert-expiration-546667/client.key: {Name:mkacd8a76ba86ed43381e6820b35eef9fb8c4a20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:02:25.366619  172955 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/cert-expiration-546667/apiserver.key.5e481fd9
	I1013 22:02:25.366633  172955 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/cert-expiration-546667/apiserver.crt.5e481fd9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1013 22:02:26.497380  172955 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/cert-expiration-546667/apiserver.crt.5e481fd9 ...
	I1013 22:02:26.497394  172955 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/cert-expiration-546667/apiserver.crt.5e481fd9: {Name:mk64323f17ec01f3c447118696acf2785a83622b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:02:26.497591  172955 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/cert-expiration-546667/apiserver.key.5e481fd9 ...
	I1013 22:02:26.497599  172955 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/cert-expiration-546667/apiserver.key.5e481fd9: {Name:mk4a8f870f29375f623a944c617a59f8b3b95e2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:02:26.497684  172955 certs.go:382] copying /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/cert-expiration-546667/apiserver.crt.5e481fd9 -> /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/cert-expiration-546667/apiserver.crt
	I1013 22:02:26.497757  172955 certs.go:386] copying /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/cert-expiration-546667/apiserver.key.5e481fd9 -> /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/cert-expiration-546667/apiserver.key
	I1013 22:02:26.497807  172955 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/cert-expiration-546667/proxy-client.key
	I1013 22:02:26.497818  172955 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/cert-expiration-546667/proxy-client.crt with IP's: []
	I1013 22:02:26.822048  172955 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/cert-expiration-546667/proxy-client.crt ...
	I1013 22:02:26.822062  172955 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/cert-expiration-546667/proxy-client.crt: {Name:mk16e3d56fb16a0eec8d1af1b061e11c97d7af67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:02:26.822238  172955 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/cert-expiration-546667/proxy-client.key ...
	I1013 22:02:26.822245  172955 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/cert-expiration-546667/proxy-client.key: {Name:mk4497e1fba58d487616c3190ebe767a57e87d5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:02:26.822429  172955 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/4299.pem (1338 bytes)
	W1013 22:02:26.822463  172955 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-2495/.minikube/certs/4299_empty.pem, impossibly tiny 0 bytes
	I1013 22:02:26.822470  172955 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca-key.pem (1679 bytes)
	I1013 22:02:26.822492  172955 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem (1082 bytes)
	I1013 22:02:26.822516  172955 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem (1123 bytes)
	I1013 22:02:26.822538  172955 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/key.pem (1675 bytes)
	I1013 22:02:26.822578  172955 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem (1708 bytes)
	I1013 22:02:26.823147  172955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 22:02:26.841858  172955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1013 22:02:26.859422  172955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 22:02:26.877985  172955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1013 22:02:26.894857  172955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/cert-expiration-546667/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1013 22:02:26.911682  172955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/cert-expiration-546667/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1013 22:02:26.929377  172955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/cert-expiration-546667/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 22:02:26.949731  172955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/cert-expiration-546667/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1013 22:02:26.972508  172955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem --> /usr/share/ca-certificates/42992.pem (1708 bytes)
	I1013 22:02:26.992150  172955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 22:02:27.011701  172955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/certs/4299.pem --> /usr/share/ca-certificates/4299.pem (1338 bytes)
	I1013 22:02:27.030382  172955 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 22:02:27.043009  172955 ssh_runner.go:195] Run: openssl version
	I1013 22:02:27.049190  172955 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/42992.pem && ln -fs /usr/share/ca-certificates/42992.pem /etc/ssl/certs/42992.pem"
	I1013 22:02:27.057568  172955 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42992.pem
	I1013 22:02:27.061118  172955 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 13 21:06 /usr/share/ca-certificates/42992.pem
	I1013 22:02:27.061173  172955 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42992.pem
	I1013 22:02:27.101578  172955 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/42992.pem /etc/ssl/certs/3ec20f2e.0"
	I1013 22:02:27.109616  172955 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 22:02:27.117603  172955 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:02:27.121391  172955 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 20:59 /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:02:27.121441  172955 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:02:27.164384  172955 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1013 22:02:27.172717  172955 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4299.pem && ln -fs /usr/share/ca-certificates/4299.pem /etc/ssl/certs/4299.pem"
	I1013 22:02:27.180625  172955 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4299.pem
	I1013 22:02:27.184327  172955 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 13 21:06 /usr/share/ca-certificates/4299.pem
	I1013 22:02:27.184386  172955 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4299.pem
	I1013 22:02:27.225369  172955 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4299.pem /etc/ssl/certs/51391683.0"
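The test/ls/openssl/ln sequence above follows the standard OpenSSL hashed-directory layout: each CA certificate placed in /etc/ssl/certs gets a symlink named after its subject hash so TLS clients can look it up. A minimal sketch of that convention, using the certificate path from this log:

	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # subject hash (b5213941 for minikubeCA.pem in this run)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"              # <hash>.0 is the expected link name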
	I1013 22:02:27.233688  172955 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 22:02:27.237098  172955 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1013 22:02:27.237139  172955 kubeadm.go:400] StartCluster: {Name:cert-expiration-546667 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-546667 Namespace:default APIServerHAVIP: APIServerName:minikub
	I1013 22:02:27.237139  172955 kubeadm.go:400] StartCluster: {Name:cert-expiration-546667 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-546667 Namespace:default APIServerHAVIP: APIServerName:minikub
	I1013 22:02:27.237198  172955 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 22:02:27.237262  172955 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 22:02:27.269012  172955 cri.go:89] found id: ""
	I1013 22:02:27.269069  172955 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1013 22:02:27.276627  172955 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1013 22:02:27.283655  172955 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1013 22:02:27.283719  172955 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1013 22:02:27.291108  172955 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1013 22:02:27.291116  172955 kubeadm.go:157] found existing configuration files:
	
	I1013 22:02:27.291165  172955 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1013 22:02:27.298401  172955 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1013 22:02:27.298452  172955 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1013 22:02:27.305256  172955 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1013 22:02:27.312699  172955 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1013 22:02:27.312753  172955 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1013 22:02:27.319743  172955 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1013 22:02:27.327321  172955 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1013 22:02:27.327373  172955 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1013 22:02:27.334638  172955 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1013 22:02:27.342620  172955 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1013 22:02:27.342677  172955 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1013 22:02:27.349735  172955 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1013 22:02:27.387716  172955 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1013 22:02:27.387767  172955 kubeadm.go:318] [preflight] Running pre-flight checks
	I1013 22:02:27.418484  172955 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1013 22:02:27.418552  172955 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1013 22:02:27.418587  172955 kubeadm.go:318] OS: Linux
	I1013 22:02:27.418633  172955 kubeadm.go:318] CGROUPS_CPU: enabled
	I1013 22:02:27.418683  172955 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1013 22:02:27.418731  172955 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1013 22:02:27.418788  172955 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1013 22:02:27.418837  172955 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1013 22:02:27.418888  172955 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1013 22:02:27.418936  172955 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1013 22:02:27.418985  172955 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1013 22:02:27.419032  172955 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1013 22:02:27.487413  172955 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1013 22:02:27.487564  172955 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1013 22:02:27.487681  172955 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1013 22:02:27.497161  172955 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1013 22:02:27.503318  172955 out.go:252]   - Generating certificates and keys ...
	I1013 22:02:27.503410  172955 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1013 22:02:27.503476  172955 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1013 22:02:28.929158  172955 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1013 22:02:29.154465  172955 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1013 22:02:29.438776  172955 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1013 22:02:30.596460  172955 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1013 22:02:30.705592  172955 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1013 22:02:30.705958  172955 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [cert-expiration-546667 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1013 22:02:31.265852  172955 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1013 22:02:31.266000  172955 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [cert-expiration-546667 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1013 22:02:31.781794  172955 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1013 22:02:32.278308  172955 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1013 22:02:33.743328  172955 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1013 22:02:33.743563  172955 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1013 22:02:34.076867  172955 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1013 22:02:34.747715  172955 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1013 22:02:35.020420  172955 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1013 22:02:35.297363  172955 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1013 22:02:36.418076  172955 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1013 22:02:36.418585  172955 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1013 22:02:36.421169  172955 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1013 22:02:36.424814  172955 out.go:252]   - Booting up control plane ...
	I1013 22:02:36.424928  172955 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1013 22:02:36.425008  172955 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1013 22:02:36.425098  172955 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1013 22:02:36.442998  172955 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1013 22:02:36.443105  172955 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1013 22:02:36.451730  172955 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1013 22:02:36.452034  172955 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1013 22:02:36.452078  172955 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1013 22:02:36.577591  172955 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1013 22:02:36.577707  172955 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1013 22:02:37.578892  172955 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001790297s
	I1013 22:02:37.584844  172955 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1013 22:02:37.584936  172955 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1013 22:02:37.585174  172955 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1013 22:02:37.585260  172955 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1013 22:02:40.739536  172955 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.154175033s
	I1013 22:02:42.427212  172955 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.84237559s
	I1013 22:02:44.087277  172955 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.502177012s
	I1013 22:02:44.107497  172955 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1013 22:02:44.122148  172955 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1013 22:02:44.137366  172955 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1013 22:02:44.137612  172955 kubeadm.go:318] [mark-control-plane] Marking the node cert-expiration-546667 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1013 22:02:44.149241  172955 kubeadm.go:318] [bootstrap-token] Using token: 2fruxx.bc6qysf5kdkk5lkg
	I1013 22:02:44.152080  172955 out.go:252]   - Configuring RBAC rules ...
	I1013 22:02:44.152200  172955 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1013 22:02:44.155829  172955 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1013 22:02:44.168428  172955 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1013 22:02:44.172338  172955 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1013 22:02:44.176022  172955 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1013 22:02:44.179899  172955 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1013 22:02:44.493444  172955 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1013 22:02:44.949583  172955 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1013 22:02:45.498630  172955 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1013 22:02:45.498641  172955 kubeadm.go:318] 
	I1013 22:02:45.498704  172955 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1013 22:02:45.498708  172955 kubeadm.go:318] 
	I1013 22:02:45.498788  172955 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1013 22:02:45.498791  172955 kubeadm.go:318] 
	I1013 22:02:45.498816  172955 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1013 22:02:45.498877  172955 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1013 22:02:45.498928  172955 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1013 22:02:45.498932  172955 kubeadm.go:318] 
	I1013 22:02:45.498987  172955 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1013 22:02:45.498990  172955 kubeadm.go:318] 
	I1013 22:02:45.499039  172955 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1013 22:02:45.499053  172955 kubeadm.go:318] 
	I1013 22:02:45.499107  172955 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1013 22:02:45.499184  172955 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1013 22:02:45.499260  172955 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1013 22:02:45.499264  172955 kubeadm.go:318] 
	I1013 22:02:45.499350  172955 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1013 22:02:45.499429  172955 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1013 22:02:45.499432  172955 kubeadm.go:318] 
	I1013 22:02:45.499518  172955 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 2fruxx.bc6qysf5kdkk5lkg \
	I1013 22:02:45.499625  172955 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:8246abc30e5ad4f73e0c5665e9f98b5f472397f3707b313971a0405cce9e4e60 \
	I1013 22:02:45.499645  172955 kubeadm.go:318] 	--control-plane 
	I1013 22:02:45.499649  172955 kubeadm.go:318] 
	I1013 22:02:45.499736  172955 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1013 22:02:45.499740  172955 kubeadm.go:318] 
	I1013 22:02:45.499867  172955 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 2fruxx.bc6qysf5kdkk5lkg \
	I1013 22:02:45.499973  172955 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:8246abc30e5ad4f73e0c5665e9f98b5f472397f3707b313971a0405cce9e4e60 
	I1013 22:02:45.503251  172955 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1013 22:02:45.503477  172955 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1013 22:02:45.503586  172955 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
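The Service-Kubelet warning above is kubeadm's standard note that the kubelet unit is running but not enabled at boot; the fix it suggests is a single command on the node (shown here only for context, the test does not run it):

	sudo systemctl enable kubelet.service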
	I1013 22:02:45.503601  172955 cni.go:84] Creating CNI manager for ""
	I1013 22:02:45.503608  172955 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 22:02:45.506839  172955 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1013 22:02:45.509764  172955 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1013 22:02:45.513843  172955 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1013 22:02:45.513854  172955 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1013 22:02:45.528371  172955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1013 22:02:45.815097  172955 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1013 22:02:45.815209  172955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:02:45.815292  172955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes cert-expiration-546667 minikube.k8s.io/updated_at=2025_10_13T22_02_45_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6d66ff63385795e7745a92b3d96cb54f5b977801 minikube.k8s.io/name=cert-expiration-546667 minikube.k8s.io/primary=true
	I1013 22:02:45.971212  172955 ops.go:34] apiserver oom_adj: -16
	I1013 22:02:45.971231  172955 kubeadm.go:1113] duration metric: took 156.072019ms to wait for elevateKubeSystemPrivileges
	I1013 22:02:45.971242  172955 kubeadm.go:402] duration metric: took 18.734107137s to StartCluster
	I1013 22:02:45.971256  172955 settings.go:142] acquiring lock: {Name:mk4a4b065845724eb9b4bb1832a39a02e57dd066 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:02:45.971312  172955 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-2495/kubeconfig
	I1013 22:02:45.971992  172955 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/kubeconfig: {Name:mke211bdcf461494c5205a242d94f4d44bbce10e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:02:45.972193  172955 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 22:02:45.972303  172955 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1013 22:02:45.972543  172955 config.go:182] Loaded profile config "cert-expiration-546667": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:02:45.972581  172955 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1013 22:02:45.972646  172955 addons.go:69] Setting storage-provisioner=true in profile "cert-expiration-546667"
	I1013 22:02:45.972660  172955 addons.go:238] Setting addon storage-provisioner=true in "cert-expiration-546667"
	I1013 22:02:45.972679  172955 host.go:66] Checking if "cert-expiration-546667" exists ...
	I1013 22:02:45.973424  172955 addons.go:69] Setting default-storageclass=true in profile "cert-expiration-546667"
	I1013 22:02:45.973434  172955 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "cert-expiration-546667"
	I1013 22:02:45.973668  172955 cli_runner.go:164] Run: docker container inspect cert-expiration-546667 --format={{.State.Status}}
	I1013 22:02:45.973783  172955 cli_runner.go:164] Run: docker container inspect cert-expiration-546667 --format={{.State.Status}}
	I1013 22:02:45.976920  172955 out.go:179] * Verifying Kubernetes components...
	I1013 22:02:45.986264  172955 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:02:46.015912  172955 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1013 22:02:46.018893  172955 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 22:02:46.018904  172955 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1013 22:02:46.018971  172955 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-546667
	I1013 22:02:46.024099  172955 addons.go:238] Setting addon default-storageclass=true in "cert-expiration-546667"
	I1013 22:02:46.024129  172955 host.go:66] Checking if "cert-expiration-546667" exists ...
	I1013 22:02:46.024554  172955 cli_runner.go:164] Run: docker container inspect cert-expiration-546667 --format={{.State.Status}}
	I1013 22:02:46.066872  172955 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33041 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/cert-expiration-546667/id_rsa Username:docker}
	I1013 22:02:46.072600  172955 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1013 22:02:46.072618  172955 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1013 22:02:46.072680  172955 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-546667
	I1013 22:02:46.115145  172955 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33041 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/cert-expiration-546667/id_rsa Username:docker}
	I1013 22:02:46.301195  172955 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1013 22:02:46.339395  172955 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 22:02:46.341435  172955 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1013 22:02:46.387553  172955 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 22:02:46.709734  172955 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1013 22:02:46.711375  172955 api_server.go:52] waiting for apiserver process to appear ...
	I1013 22:02:46.711419  172955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 22:02:46.989049  172955 api_server.go:72] duration metric: took 1.016830925s to wait for apiserver process to appear ...
	I1013 22:02:46.989063  172955 api_server.go:88] waiting for apiserver healthz status ...
	I1013 22:02:46.989080  172955 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1013 22:02:46.992368  172955 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1013 22:02:46.995154  172955 addons.go:514] duration metric: took 1.02255986s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1013 22:02:47.000906  172955 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1013 22:02:47.002036  172955 api_server.go:141] control plane version: v1.34.1
	I1013 22:02:47.002054  172955 api_server.go:131] duration metric: took 12.98516ms to wait for apiserver health ...
	I1013 22:02:47.002062  172955 system_pods.go:43] waiting for kube-system pods to appear ...
	I1013 22:02:47.020752  172955 system_pods.go:59] 5 kube-system pods found
	I1013 22:02:47.020772  172955 system_pods.go:61] "etcd-cert-expiration-546667" [6a0d77b3-3502-49f3-bfe7-b3195bad8c36] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1013 22:02:47.020780  172955 system_pods.go:61] "kube-apiserver-cert-expiration-546667" [87a60e41-40da-4018-8c93-76ce9525fc9c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1013 22:02:47.020798  172955 system_pods.go:61] "kube-controller-manager-cert-expiration-546667" [4989fd2f-fdb9-4386-861c-838a020eabd0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1013 22:02:47.020804  172955 system_pods.go:61] "kube-scheduler-cert-expiration-546667" [272f2de7-07df-41d4-bfa6-338ac534d5ed] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1013 22:02:47.020808  172955 system_pods.go:61] "storage-provisioner" [0cd3339d-4269-488b-b08c-6e49291d963e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1013 22:02:47.020813  172955 system_pods.go:74] duration metric: took 18.745921ms to wait for pod list to return data ...
	I1013 22:02:47.020823  172955 kubeadm.go:586] duration metric: took 1.048610703s to wait for: map[apiserver:true system_pods:true]
	I1013 22:02:47.020835  172955 node_conditions.go:102] verifying NodePressure condition ...
	I1013 22:02:47.023281  172955 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1013 22:02:47.023298  172955 node_conditions.go:123] node cpu capacity is 2
	I1013 22:02:47.023308  172955 node_conditions.go:105] duration metric: took 2.470209ms to run NodePressure ...
	I1013 22:02:47.023320  172955 start.go:241] waiting for startup goroutines ...
	I1013 22:02:47.214191  172955 kapi.go:214] "coredns" deployment in "kube-system" namespace and "cert-expiration-546667" context rescaled to 1 replicas
	I1013 22:02:47.214217  172955 start.go:246] waiting for cluster config update ...
	I1013 22:02:47.214228  172955 start.go:255] writing updated cluster config ...
	I1013 22:02:47.214518  172955 ssh_runner.go:195] Run: rm -f paused
	I1013 22:02:47.281966  172955 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1013 22:02:47.285427  172955 out.go:179] * Done! kubectl is now configured to use "cert-expiration-546667" cluster and "default" namespace by default
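At this point the cert-expiration-546667 profile is up and its context has been written to the kubeconfig updated earlier in this log. A hedged usage sketch, assuming the context name matches the profile name as minikube normally sets it:

	kubectl config use-context cert-expiration-546667
	kubectl get pods -n kube-system    # should list the control-plane pods reported above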
	I1013 22:03:37.369801  168487 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.00002897s
	I1013 22:03:37.369904  168487 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000778727s
	I1013 22:03:37.370116  168487 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001248647s
	I1013 22:03:37.370131  168487 kubeadm.go:318] 
	I1013 22:03:37.370226  168487 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1013 22:03:37.370317  168487 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1013 22:03:37.370410  168487 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1013 22:03:37.370626  168487 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1013 22:03:37.370738  168487 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1013 22:03:37.370858  168487 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1013 22:03:37.370869  168487 kubeadm.go:318] 
	I1013 22:03:37.375493  168487 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1013 22:03:37.375755  168487 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1013 22:03:37.375913  168487 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1013 22:03:37.376508  168487 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.85.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1013 22:03:37.376604  168487 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
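The failing run above (process 168487) ends with kubeadm's generic control-plane troubleshooting hint. Spelled out against the CRI-O endpoint used in this job, the suggested commands are (CONTAINERID is a placeholder, exactly as in the kubeadm message):

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID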
	I1013 22:03:37.376716  168487 kubeadm.go:402] duration metric: took 8m14.394697621s to StartCluster
	I1013 22:03:37.376750  168487 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1013 22:03:37.376810  168487 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1013 22:03:37.400872  168487 cri.go:89] found id: ""
	I1013 22:03:37.400909  168487 logs.go:282] 0 containers: []
	W1013 22:03:37.400918  168487 logs.go:284] No container was found matching "kube-apiserver"
	I1013 22:03:37.400925  168487 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1013 22:03:37.400990  168487 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1013 22:03:37.425598  168487 cri.go:89] found id: ""
	I1013 22:03:37.425622  168487 logs.go:282] 0 containers: []
	W1013 22:03:37.425640  168487 logs.go:284] No container was found matching "etcd"
	I1013 22:03:37.425647  168487 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1013 22:03:37.425707  168487 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1013 22:03:37.454447  168487 cri.go:89] found id: ""
	I1013 22:03:37.454473  168487 logs.go:282] 0 containers: []
	W1013 22:03:37.454481  168487 logs.go:284] No container was found matching "coredns"
	I1013 22:03:37.454487  168487 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1013 22:03:37.454555  168487 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1013 22:03:37.479272  168487 cri.go:89] found id: ""
	I1013 22:03:37.479298  168487 logs.go:282] 0 containers: []
	W1013 22:03:37.479307  168487 logs.go:284] No container was found matching "kube-scheduler"
	I1013 22:03:37.479314  168487 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1013 22:03:37.479369  168487 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1013 22:03:37.506114  168487 cri.go:89] found id: ""
	I1013 22:03:37.506137  168487 logs.go:282] 0 containers: []
	W1013 22:03:37.506146  168487 logs.go:284] No container was found matching "kube-proxy"
	I1013 22:03:37.506152  168487 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1013 22:03:37.506230  168487 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1013 22:03:37.530808  168487 cri.go:89] found id: ""
	I1013 22:03:37.530843  168487 logs.go:282] 0 containers: []
	W1013 22:03:37.530852  168487 logs.go:284] No container was found matching "kube-controller-manager"
	I1013 22:03:37.530860  168487 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1013 22:03:37.530918  168487 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1013 22:03:37.562038  168487 cri.go:89] found id: ""
	I1013 22:03:37.562059  168487 logs.go:282] 0 containers: []
	W1013 22:03:37.562067  168487 logs.go:284] No container was found matching "kindnet"
	I1013 22:03:37.562076  168487 logs.go:123] Gathering logs for kubelet ...
	I1013 22:03:37.562087  168487 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1013 22:03:37.654862  168487 logs.go:123] Gathering logs for dmesg ...
	I1013 22:03:37.654896  168487 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1013 22:03:37.670117  168487 logs.go:123] Gathering logs for describe nodes ...
	I1013 22:03:37.670142  168487 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1013 22:03:37.739064  168487 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1013 22:03:37.730977    2371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1013 22:03:37.731668    2371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1013 22:03:37.733236    2371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1013 22:03:37.733690    2371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1013 22:03:37.735118    2371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1013 22:03:37.730977    2371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1013 22:03:37.731668    2371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1013 22:03:37.733236    2371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1013 22:03:37.733690    2371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1013 22:03:37.735118    2371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1013 22:03:37.739086  168487 logs.go:123] Gathering logs for CRI-O ...
	I1013 22:03:37.739099  168487 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1013 22:03:37.813093  168487 logs.go:123] Gathering logs for container status ...
	I1013 22:03:37.813127  168487 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1013 22:03:37.842469  168487 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 521.206463ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.00002897s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000778727s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001248647s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.85.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1013 22:03:37.842524  168487 out.go:285] * 
	W1013 22:03:37.842578  168487 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 521.206463ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.00002897s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000778727s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001248647s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.85.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1013 22:03:37.842597  168487 out.go:285] * 
	W1013 22:03:37.844753  168487 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1013 22:03:37.850676  168487 out.go:203] 
	W1013 22:03:37.854421  168487 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 521.206463ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.00002897s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000778727s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001248647s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.85.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1013 22:03:37.854450  168487 out.go:285] * 
	I1013 22:03:37.857625  168487 out.go:203] 
	
	
	==> CRI-O <==
	Oct 13 22:03:32 force-systemd-env-312094 crio[838]: time="2025-10-13T22:03:32.419141576Z" level=info msg="createCtr: removing container 1edab881257ebeadd178a355767b44ce0bbaa9629f025f2401b7091ba3eea2e0" id=1adba71c-0017-4b1d-a1cc-11ef8bbb1807 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:03:32 force-systemd-env-312094 crio[838]: time="2025-10-13T22:03:32.419233012Z" level=info msg="createCtr: deleting container 1edab881257ebeadd178a355767b44ce0bbaa9629f025f2401b7091ba3eea2e0 from storage" id=1adba71c-0017-4b1d-a1cc-11ef8bbb1807 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:03:32 force-systemd-env-312094 crio[838]: time="2025-10-13T22:03:32.424202276Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-force-systemd-env-312094_kube-system_5dd56c9281cb0111c489e81525948b00_0" id=1adba71c-0017-4b1d-a1cc-11ef8bbb1807 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:03:36 force-systemd-env-312094 crio[838]: time="2025-10-13T22:03:36.397659828Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=5c8f8475-e9bb-42ad-be50-adeb416f479d name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:03:36 force-systemd-env-312094 crio[838]: time="2025-10-13T22:03:36.398485867Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=782e71fd-0699-4815-9edd-6c6697e42d00 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:03:36 force-systemd-env-312094 crio[838]: time="2025-10-13T22:03:36.399382929Z" level=info msg="Creating container: kube-system/kube-apiserver-force-systemd-env-312094/kube-apiserver" id=11ad9603-384e-4949-85fe-ebbf8c1f495f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:03:36 force-systemd-env-312094 crio[838]: time="2025-10-13T22:03:36.399595346Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:03:36 force-systemd-env-312094 crio[838]: time="2025-10-13T22:03:36.403954369Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:03:36 force-systemd-env-312094 crio[838]: time="2025-10-13T22:03:36.404403318Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:03:36 force-systemd-env-312094 crio[838]: time="2025-10-13T22:03:36.418558473Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=11ad9603-384e-4949-85fe-ebbf8c1f495f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:03:36 force-systemd-env-312094 crio[838]: time="2025-10-13T22:03:36.419589168Z" level=info msg="createCtr: deleting container ID c46aad7593b08ff7abf45550d5f9d2c4bbc48f7232a6c40289a35c3b123f1c69 from idIndex" id=11ad9603-384e-4949-85fe-ebbf8c1f495f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:03:36 force-systemd-env-312094 crio[838]: time="2025-10-13T22:03:36.41962632Z" level=info msg="createCtr: removing container c46aad7593b08ff7abf45550d5f9d2c4bbc48f7232a6c40289a35c3b123f1c69" id=11ad9603-384e-4949-85fe-ebbf8c1f495f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:03:36 force-systemd-env-312094 crio[838]: time="2025-10-13T22:03:36.419661552Z" level=info msg="createCtr: deleting container c46aad7593b08ff7abf45550d5f9d2c4bbc48f7232a6c40289a35c3b123f1c69 from storage" id=11ad9603-384e-4949-85fe-ebbf8c1f495f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:03:36 force-systemd-env-312094 crio[838]: time="2025-10-13T22:03:36.422341404Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-force-systemd-env-312094_kube-system_063b8e479e166bafce7c9765596324bf_0" id=11ad9603-384e-4949-85fe-ebbf8c1f495f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:03:38 force-systemd-env-312094 crio[838]: time="2025-10-13T22:03:38.398219472Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=eeef8579-e755-412b-a9f6-30e766bf2204 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:03:38 force-systemd-env-312094 crio[838]: time="2025-10-13T22:03:38.400077666Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=dd0d9735-649c-4dfa-8d20-82cc737f4ffe name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:03:38 force-systemd-env-312094 crio[838]: time="2025-10-13T22:03:38.401209708Z" level=info msg="Creating container: kube-system/kube-controller-manager-force-systemd-env-312094/kube-controller-manager" id=66bae487-bd37-44ac-8ffd-ee2af399b1ef name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:03:38 force-systemd-env-312094 crio[838]: time="2025-10-13T22:03:38.401504782Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:03:38 force-systemd-env-312094 crio[838]: time="2025-10-13T22:03:38.413517024Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:03:38 force-systemd-env-312094 crio[838]: time="2025-10-13T22:03:38.414195899Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:03:38 force-systemd-env-312094 crio[838]: time="2025-10-13T22:03:38.426388921Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=66bae487-bd37-44ac-8ffd-ee2af399b1ef name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:03:38 force-systemd-env-312094 crio[838]: time="2025-10-13T22:03:38.428841611Z" level=info msg="createCtr: deleting container ID 7f33e1eca0670184baa7fcaba09a9c3feb1359f6de9be096fc6820d7e4e0ede3 from idIndex" id=66bae487-bd37-44ac-8ffd-ee2af399b1ef name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:03:38 force-systemd-env-312094 crio[838]: time="2025-10-13T22:03:38.428988652Z" level=info msg="createCtr: removing container 7f33e1eca0670184baa7fcaba09a9c3feb1359f6de9be096fc6820d7e4e0ede3" id=66bae487-bd37-44ac-8ffd-ee2af399b1ef name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:03:38 force-systemd-env-312094 crio[838]: time="2025-10-13T22:03:38.429082278Z" level=info msg="createCtr: deleting container 7f33e1eca0670184baa7fcaba09a9c3feb1359f6de9be096fc6820d7e4e0ede3 from storage" id=66bae487-bd37-44ac-8ffd-ee2af399b1ef name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:03:38 force-systemd-env-312094 crio[838]: time="2025-10-13T22:03:38.437002838Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-force-systemd-env-312094_kube-system_0a32b71925571f1ca365b1781854f8fd_0" id=66bae487-bd37-44ac-8ffd-ee2af399b1ef name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1013 22:03:38.921012    2487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1013 22:03:38.921738    2487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1013 22:03:38.923447    2487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1013 22:03:38.923833    2487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1013 22:03:38.925082    2487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct13 21:29] overlayfs: idmapped layers are currently not supported
	[ +40.174368] overlayfs: idmapped layers are currently not supported
	[Oct13 21:30] hrtimer: interrupt took 51471165 ns
	[Oct13 21:31] overlayfs: idmapped layers are currently not supported
	[Oct13 21:36] overlayfs: idmapped layers are currently not supported
	[ +36.803698] overlayfs: idmapped layers are currently not supported
	[Oct13 21:38] overlayfs: idmapped layers are currently not supported
	[Oct13 21:39] overlayfs: idmapped layers are currently not supported
	[Oct13 21:40] overlayfs: idmapped layers are currently not supported
	[Oct13 21:41] overlayfs: idmapped layers are currently not supported
	[Oct13 21:42] overlayfs: idmapped layers are currently not supported
	[  +7.684868] overlayfs: idmapped layers are currently not supported
	[Oct13 21:43] overlayfs: idmapped layers are currently not supported
	[ +17.500139] overlayfs: idmapped layers are currently not supported
	[Oct13 21:44] overlayfs: idmapped layers are currently not supported
	[ +25.978359] overlayfs: idmapped layers are currently not supported
	[Oct13 21:46] overlayfs: idmapped layers are currently not supported
	[Oct13 21:47] overlayfs: idmapped layers are currently not supported
	[Oct13 21:49] overlayfs: idmapped layers are currently not supported
	[Oct13 21:50] overlayfs: idmapped layers are currently not supported
	[Oct13 21:51] overlayfs: idmapped layers are currently not supported
	[Oct13 21:53] overlayfs: idmapped layers are currently not supported
	[Oct13 21:54] overlayfs: idmapped layers are currently not supported
	[Oct13 21:55] overlayfs: idmapped layers are currently not supported
	[Oct13 22:02] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 22:03:38 up  1:45,  0 user,  load average: 1.24, 1.15, 1.64
	Linux force-systemd-env-312094 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 13 22:03:32 force-systemd-env-312094 kubelet[1793]: E1013 22:03:32.424599    1793 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 13 22:03:32 force-systemd-env-312094 kubelet[1793]:         container etcd start failed in pod etcd-force-systemd-env-312094_kube-system(5dd56c9281cb0111c489e81525948b00): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 13 22:03:32 force-systemd-env-312094 kubelet[1793]:  > logger="UnhandledError"
	Oct 13 22:03:32 force-systemd-env-312094 kubelet[1793]: E1013 22:03:32.424626    1793 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-force-systemd-env-312094" podUID="5dd56c9281cb0111c489e81525948b00"
	Oct 13 22:03:33 force-systemd-env-312094 kubelet[1793]: E1013 22:03:33.980769    1793 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.85.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.85.2:8443: connect: connection refused" event="&Event{ObjectMeta:{force-systemd-env-312094.186e2be77166736a  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:force-systemd-env-312094,UID:force-systemd-env-312094,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node force-systemd-env-312094 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:force-systemd-env-312094,},FirstTimestamp:2025-10-13 21:59:37.388421994 +0000 UTC m=+0.545518524,LastTimestamp:2025-10-13 21:59:37.388421994 +0000 UTC m=+0.545518524,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:force-systemd-env-312094,}"
	Oct 13 22:03:33 force-systemd-env-312094 kubelet[1793]: E1013 22:03:33.994249    1793 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.85.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/force-systemd-env-312094?timeout=10s\": dial tcp 192.168.85.2:8443: connect: connection refused" interval="7s"
	Oct 13 22:03:34 force-systemd-env-312094 kubelet[1793]: I1013 22:03:34.182054    1793 kubelet_node_status.go:75] "Attempting to register node" node="force-systemd-env-312094"
	Oct 13 22:03:34 force-systemd-env-312094 kubelet[1793]: E1013 22:03:34.182547    1793 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.85.2:8443/api/v1/nodes\": dial tcp 192.168.85.2:8443: connect: connection refused" node="force-systemd-env-312094"
	Oct 13 22:03:36 force-systemd-env-312094 kubelet[1793]: E1013 22:03:36.397272    1793 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"force-systemd-env-312094\" not found" node="force-systemd-env-312094"
	Oct 13 22:03:36 force-systemd-env-312094 kubelet[1793]: E1013 22:03:36.422644    1793 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 13 22:03:36 force-systemd-env-312094 kubelet[1793]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 13 22:03:36 force-systemd-env-312094 kubelet[1793]:  > podSandboxID="de0ec43bb37a1e05cc6bc098d4ae1ad579774e83127873938e8d825f1237c417"
	Oct 13 22:03:36 force-systemd-env-312094 kubelet[1793]: E1013 22:03:36.422739    1793 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 13 22:03:36 force-systemd-env-312094 kubelet[1793]:         container kube-apiserver start failed in pod kube-apiserver-force-systemd-env-312094_kube-system(063b8e479e166bafce7c9765596324bf): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 13 22:03:36 force-systemd-env-312094 kubelet[1793]:  > logger="UnhandledError"
	Oct 13 22:03:36 force-systemd-env-312094 kubelet[1793]: E1013 22:03:36.422769    1793 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-force-systemd-env-312094" podUID="063b8e479e166bafce7c9765596324bf"
	Oct 13 22:03:37 force-systemd-env-312094 kubelet[1793]: E1013 22:03:37.420781    1793 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"force-systemd-env-312094\" not found"
	Oct 13 22:03:38 force-systemd-env-312094 kubelet[1793]: E1013 22:03:38.397585    1793 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"force-systemd-env-312094\" not found" node="force-systemd-env-312094"
	Oct 13 22:03:38 force-systemd-env-312094 kubelet[1793]: E1013 22:03:38.437359    1793 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 13 22:03:38 force-systemd-env-312094 kubelet[1793]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 13 22:03:38 force-systemd-env-312094 kubelet[1793]:  > podSandboxID="ca1deafefaa51f7127a01691f89966f4ff5b136ec7b1f71dc4f7251811079ddf"
	Oct 13 22:03:38 force-systemd-env-312094 kubelet[1793]: E1013 22:03:38.437461    1793 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 13 22:03:38 force-systemd-env-312094 kubelet[1793]:         container kube-controller-manager start failed in pod kube-controller-manager-force-systemd-env-312094_kube-system(0a32b71925571f1ca365b1781854f8fd): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 13 22:03:38 force-systemd-env-312094 kubelet[1793]:  > logger="UnhandledError"
	Oct 13 22:03:38 force-systemd-env-312094 kubelet[1793]: E1013 22:03:38.437492    1793 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-force-systemd-env-312094" podUID="0a32b71925571f1ca365b1781854f8fd"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-env-312094 -n force-systemd-env-312094
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-env-312094 -n force-systemd-env-312094: exit status 6 (334.685685ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1013 22:03:39.365665  175860 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-env-312094" does not appear in /home/jenkins/minikube-integration/21724-2495/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "force-systemd-env-312094" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-312094" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-312094
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-312094: (1.926358063s)
--- FAIL: TestForceSystemdEnv (511.14s)
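The repeated "cannot open sd-bus: No such file or directory" errors in the CRI-O and kubelet sections above are what blocked every control-plane container: with CRI-O's cgroup_manager set to "systemd", the OCI runtime has to reach systemd over D-Bus, and that connection was not available inside the node. A minimal diagnostic sketch to run inside an affected node while it is still up (the config path and socket locations are the standard CRI-O/systemd defaults, not values taken from this report):

	# confirm CRI-O is configured for the systemd cgroup manager
	sudo grep -R cgroup_manager /etc/crio/
	# check that the D-Bus sockets the runtime needs actually exist
	ls -l /run/dbus/system_bus_socket /run/systemd/private

If those sockets are missing, container creation with the systemd cgroup manager cannot succeed, which matches the createCtr failures logged above.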

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (603.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-192425 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-192425 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-tth7p" [46c69a2c-1831-4bb9-83d5-86e2d7f18b2d] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-192425 -n functional-192425
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-10-13 21:19:00.993501757 +0000 UTC m=+1265.828330935
functional_test.go:1645: (dbg) Run:  kubectl --context functional-192425 describe po hello-node-connect-7d85dfc575-tth7p -n default
functional_test.go:1645: (dbg) kubectl --context functional-192425 describe po hello-node-connect-7d85dfc575-tth7p -n default:
Name:             hello-node-connect-7d85dfc575-tth7p
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-192425/192.168.49.2
Start Time:       Mon, 13 Oct 2025 21:09:00 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
  IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7gqqk (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-7gqqk:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-tth7p to functional-192425
  Normal   Pulling    7m2s (x5 over 9m59s)    kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m2s (x5 over 9m58s)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     7m2s (x5 over 9m58s)    kubelet            Error: ErrImagePull
  Normal   BackOff    4m55s (x21 over 9m58s)  kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed     4m55s (x21 over 9m58s)  kubelet            Error: ImagePullBackOff
functional_test.go:1645: (dbg) Run:  kubectl --context functional-192425 logs hello-node-connect-7d85dfc575-tth7p -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-192425 logs hello-node-connect-7d85dfc575-tth7p -n default: exit status 1 (100.810715ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-tth7p" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-192425 logs hello-node-connect-7d85dfc575-tth7p -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-192425 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-tth7p
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-192425/192.168.49.2
Start Time:       Mon, 13 Oct 2025 21:09:00 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
  IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7gqqk (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-7gqqk:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-tth7p to functional-192425
  Normal   Pulling    7m2s (x5 over 9m59s)    kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m2s (x5 over 9m58s)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     7m2s (x5 over 9m58s)    kubelet            Error: ErrImagePull
  Normal   BackOff    4m55s (x21 over 9m58s)  kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed     4m55s (x21 over 9m58s)  kubelet            Error: ImagePullBackOff

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-192425 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-192425 logs -l app=hello-node-connect: exit status 1 (89.609997ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-tth7p" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-192425 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-192425 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.101.63.87
IPs:                      10.101.63.87
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30381/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
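Every pull attempt above fails with "short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list": the node's containers-registries.conf short-name policy refuses to guess a registry for an unqualified image name. A minimal sketch of two ways to make the name unambiguous, assuming the image is intended to come from Docker Hub (the alias file name below is illustrative, not part of this report):

	# option 1: point the deployment at a fully qualified image
	kubectl --context functional-192425 set image deployment/hello-node-connect echo-server=docker.io/kicbase/echo-server:latest

	# option 2: add a short-name alias on the node, e.g. in /etc/containers/registries.conf.d/echo-server.conf
	[aliases]
	"kicbase/echo-server" = "docker.io/kicbase/echo-server"

Either change lets the kubelet resolve the image without tripping the enforcing short-name mode.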
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-192425
helpers_test.go:243: (dbg) docker inspect functional-192425:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5150b813e6641a9da587523a40498f8de202047329b013de9b2e4292912c6de5",
	        "Created": "2025-10-13T21:06:17.516316105Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 20155,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-13T21:06:17.579195488Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/5150b813e6641a9da587523a40498f8de202047329b013de9b2e4292912c6de5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5150b813e6641a9da587523a40498f8de202047329b013de9b2e4292912c6de5/hostname",
	        "HostsPath": "/var/lib/docker/containers/5150b813e6641a9da587523a40498f8de202047329b013de9b2e4292912c6de5/hosts",
	        "LogPath": "/var/lib/docker/containers/5150b813e6641a9da587523a40498f8de202047329b013de9b2e4292912c6de5/5150b813e6641a9da587523a40498f8de202047329b013de9b2e4292912c6de5-json.log",
	        "Name": "/functional-192425",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-192425:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-192425",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5150b813e6641a9da587523a40498f8de202047329b013de9b2e4292912c6de5",
	                "LowerDir": "/var/lib/docker/overlay2/4656847f8885f6a112768b531a88140af457583e59b760d9769e80afdd5351dc-init/diff:/var/lib/docker/overlay2/160e637be34680c755ffc6109c686a7fe5c4dfcba06c9274b3806724dc064518/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4656847f8885f6a112768b531a88140af457583e59b760d9769e80afdd5351dc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4656847f8885f6a112768b531a88140af457583e59b760d9769e80afdd5351dc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4656847f8885f6a112768b531a88140af457583e59b760d9769e80afdd5351dc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-192425",
	                "Source": "/var/lib/docker/volumes/functional-192425/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-192425",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-192425",
	                "name.minikube.sigs.k8s.io": "functional-192425",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3f17655e05aae7afe3aa5e1e5b87edf4572fdca51c14b610612f3e21f967300d",
	            "SandboxKey": "/var/run/docker/netns/3f17655e05aa",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-192425": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "96:e3:49:f5:8e:f3",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "dcdc198c4c302fa5f30e1546e0de530ad82ca98c7a5aeabaff35873b31fe2cdb",
	                    "EndpointID": "25492e9fb1c5a23cb0c580ab7c6110671bb9149e163498ef5d023791a5b20d26",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-192425",
	                        "5150b813e664"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-192425 -n functional-192425
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-192425 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-192425 logs -n 25: (1.450340585s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                           ARGS                                                           │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-192425 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                  │ functional-192425 │ jenkins │ v1.37.0 │ 13 Oct 25 21:08 UTC │ 13 Oct 25 21:08 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                         │ minikube          │ jenkins │ v1.37.0 │ 13 Oct 25 21:08 UTC │ 13 Oct 25 21:08 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                      │ minikube          │ jenkins │ v1.37.0 │ 13 Oct 25 21:08 UTC │ 13 Oct 25 21:08 UTC │
	│ kubectl │ functional-192425 kubectl -- --context functional-192425 get pods                                                        │ functional-192425 │ jenkins │ v1.37.0 │ 13 Oct 25 21:08 UTC │ 13 Oct 25 21:08 UTC │
	│ start   │ -p functional-192425 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                 │ functional-192425 │ jenkins │ v1.37.0 │ 13 Oct 25 21:08 UTC │ 13 Oct 25 21:08 UTC │
	│ service │ invalid-svc -p functional-192425                                                                                         │ functional-192425 │ jenkins │ v1.37.0 │ 13 Oct 25 21:08 UTC │                     │
	│ cp      │ functional-192425 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                       │ functional-192425 │ jenkins │ v1.37.0 │ 13 Oct 25 21:08 UTC │ 13 Oct 25 21:08 UTC │
	│ config  │ functional-192425 config unset cpus                                                                                      │ functional-192425 │ jenkins │ v1.37.0 │ 13 Oct 25 21:08 UTC │ 13 Oct 25 21:08 UTC │
	│ config  │ functional-192425 config get cpus                                                                                        │ functional-192425 │ jenkins │ v1.37.0 │ 13 Oct 25 21:08 UTC │                     │
	│ config  │ functional-192425 config set cpus 2                                                                                      │ functional-192425 │ jenkins │ v1.37.0 │ 13 Oct 25 21:08 UTC │ 13 Oct 25 21:08 UTC │
	│ config  │ functional-192425 config get cpus                                                                                        │ functional-192425 │ jenkins │ v1.37.0 │ 13 Oct 25 21:08 UTC │ 13 Oct 25 21:08 UTC │
	│ ssh     │ functional-192425 ssh -n functional-192425 sudo cat /home/docker/cp-test.txt                                             │ functional-192425 │ jenkins │ v1.37.0 │ 13 Oct 25 21:08 UTC │ 13 Oct 25 21:08 UTC │
	│ config  │ functional-192425 config unset cpus                                                                                      │ functional-192425 │ jenkins │ v1.37.0 │ 13 Oct 25 21:08 UTC │ 13 Oct 25 21:08 UTC │
	│ config  │ functional-192425 config get cpus                                                                                        │ functional-192425 │ jenkins │ v1.37.0 │ 13 Oct 25 21:08 UTC │                     │
	│ ssh     │ functional-192425 ssh echo hello                                                                                         │ functional-192425 │ jenkins │ v1.37.0 │ 13 Oct 25 21:08 UTC │ 13 Oct 25 21:08 UTC │
	│ cp      │ functional-192425 cp functional-192425:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd64708362/001/cp-test.txt │ functional-192425 │ jenkins │ v1.37.0 │ 13 Oct 25 21:08 UTC │ 13 Oct 25 21:08 UTC │
	│ ssh     │ functional-192425 ssh cat /etc/hostname                                                                                  │ functional-192425 │ jenkins │ v1.37.0 │ 13 Oct 25 21:08 UTC │ 13 Oct 25 21:08 UTC │
	│ ssh     │ functional-192425 ssh -n functional-192425 sudo cat /home/docker/cp-test.txt                                             │ functional-192425 │ jenkins │ v1.37.0 │ 13 Oct 25 21:08 UTC │ 13 Oct 25 21:08 UTC │
	│ tunnel  │ functional-192425 tunnel --alsologtostderr                                                                               │ functional-192425 │ jenkins │ v1.37.0 │ 13 Oct 25 21:08 UTC │                     │
	│ tunnel  │ functional-192425 tunnel --alsologtostderr                                                                               │ functional-192425 │ jenkins │ v1.37.0 │ 13 Oct 25 21:08 UTC │                     │
	│ cp      │ functional-192425 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                │ functional-192425 │ jenkins │ v1.37.0 │ 13 Oct 25 21:08 UTC │ 13 Oct 25 21:08 UTC │
	│ ssh     │ functional-192425 ssh -n functional-192425 sudo cat /tmp/does/not/exist/cp-test.txt                                      │ functional-192425 │ jenkins │ v1.37.0 │ 13 Oct 25 21:08 UTC │ 13 Oct 25 21:08 UTC │
	│ tunnel  │ functional-192425 tunnel --alsologtostderr                                                                               │ functional-192425 │ jenkins │ v1.37.0 │ 13 Oct 25 21:08 UTC │                     │
	│ addons  │ functional-192425 addons list                                                                                            │ functional-192425 │ jenkins │ v1.37.0 │ 13 Oct 25 21:09 UTC │ 13 Oct 25 21:09 UTC │
	│ addons  │ functional-192425 addons list -o json                                                                                    │ functional-192425 │ jenkins │ v1.37.0 │ 13 Oct 25 21:09 UTC │ 13 Oct 25 21:09 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 21:08:04
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
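
Each entry that follows uses the klog prefix described above: a severity letter (I, W, E, or F), the month and day, the wall-clock time, the thread id, and the source file and line that emitted the message. A minimal sketch for pulling only warnings, errors, and fatals out of a saved copy of this section (the file name here is an assumption, not something produced by the test run):

    # keep W/E/F lines, drop the Info lines that make up most of the log
    grep -E '^[[:space:]]*[WEF][0-9]{4} ' last-start.log
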
	I1013 21:08:04.852230   24285 out.go:360] Setting OutFile to fd 1 ...
	I1013 21:08:04.852354   24285 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:08:04.852357   24285 out.go:374] Setting ErrFile to fd 2...
	I1013 21:08:04.852361   24285 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:08:04.852656   24285 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-2495/.minikube/bin
	I1013 21:08:04.853003   24285 out.go:368] Setting JSON to false
	I1013 21:08:04.853920   24285 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":3019,"bootTime":1760386666,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1013 21:08:04.853978   24285 start.go:141] virtualization:  
	I1013 21:08:04.857325   24285 out.go:179] * [functional-192425] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1013 21:08:04.860182   24285 notify.go:220] Checking for updates...
	I1013 21:08:04.860192   24285 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 21:08:04.863074   24285 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 21:08:04.866010   24285 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-2495/kubeconfig
	I1013 21:08:04.868814   24285 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-2495/.minikube
	I1013 21:08:04.871600   24285 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1013 21:08:04.874422   24285 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 21:08:04.877745   24285 config.go:182] Loaded profile config "functional-192425": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:08:04.877830   24285 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 21:08:04.914017   24285 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1013 21:08:04.914118   24285 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 21:08:04.971312   24285 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:65 SystemTime:2025-10-13 21:08:04.961772558 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 21:08:04.971402   24285 docker.go:318] overlay module found
	I1013 21:08:04.974593   24285 out.go:179] * Using the docker driver based on existing profile
	I1013 21:08:04.977557   24285 start.go:305] selected driver: docker
	I1013 21:08:04.977565   24285 start.go:925] validating driver "docker" against &{Name:functional-192425 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-192425 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false D
isableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 21:08:04.977668   24285 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 21:08:04.977779   24285 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 21:08:05.046331   24285 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:65 SystemTime:2025-10-13 21:08:05.036801275 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 21:08:05.047133   24285 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 21:08:05.047161   24285 cni.go:84] Creating CNI manager for ""
	I1013 21:08:05.047220   24285 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 21:08:05.047270   24285 start.go:349] cluster config:
	{Name:functional-192425 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-192425 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Di
sableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 21:08:05.052214   24285 out.go:179] * Starting "functional-192425" primary control-plane node in "functional-192425" cluster
	I1013 21:08:05.055050   24285 cache.go:123] Beginning downloading kic base image for docker with crio
	I1013 21:08:05.057879   24285 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1013 21:08:05.060645   24285 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 21:08:05.060694   24285 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-2495/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1013 21:08:05.060701   24285 cache.go:58] Caching tarball of preloaded images
	I1013 21:08:05.060736   24285 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1013 21:08:05.060783   24285 preload.go:233] Found /home/jenkins/minikube-integration/21724-2495/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1013 21:08:05.060792   24285 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1013 21:08:05.060897   24285 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/functional-192425/config.json ...
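
The profile state driving this restart is persisted as JSON at the path above. Assuming the saved file mirrors the ClusterConfig struct logged earlier (with fields such as KubernetesConfig and ExtraOptions), it can be inspected directly; this is a sketch, not part of the test run:

    # show which extra component flags are recorded for this profile
    jq '.KubernetesConfig.ExtraOptions' \
      /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/functional-192425/config.json
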
	I1013 21:08:05.077695   24285 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1013 21:08:05.077706   24285 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1013 21:08:05.077725   24285 cache.go:232] Successfully downloaded all kic artifacts
	I1013 21:08:05.077745   24285 start.go:360] acquireMachinesLock for functional-192425: {Name:mkfcccf7aad2066cfc1a668b911b5c966e6e7a94 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 21:08:05.077809   24285 start.go:364] duration metric: took 47.244µs to acquireMachinesLock for "functional-192425"
	I1013 21:08:05.077827   24285 start.go:96] Skipping create...Using existing machine configuration
	I1013 21:08:05.077840   24285 fix.go:54] fixHost starting: 
	I1013 21:08:05.078093   24285 cli_runner.go:164] Run: docker container inspect functional-192425 --format={{.State.Status}}
	I1013 21:08:05.095112   24285 fix.go:112] recreateIfNeeded on functional-192425: state=Running err=<nil>
	W1013 21:08:05.095131   24285 fix.go:138] unexpected machine state, will restart: <nil>
	I1013 21:08:05.098366   24285 out.go:252] * Updating the running docker "functional-192425" container ...
	I1013 21:08:05.098403   24285 machine.go:93] provisionDockerMachine start ...
	I1013 21:08:05.098522   24285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-192425
	I1013 21:08:05.116499   24285 main.go:141] libmachine: Using SSH client type: native
	I1013 21:08:05.116816   24285 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1013 21:08:05.116823   24285 main.go:141] libmachine: About to run SSH command:
	hostname
	I1013 21:08:05.271252   24285 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-192425
	
	I1013 21:08:05.271266   24285 ubuntu.go:182] provisioning hostname "functional-192425"
	I1013 21:08:05.271324   24285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-192425
	I1013 21:08:05.288020   24285 main.go:141] libmachine: Using SSH client type: native
	I1013 21:08:05.288315   24285 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1013 21:08:05.288325   24285 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-192425 && echo "functional-192425" | sudo tee /etc/hostname
	I1013 21:08:05.445314   24285 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-192425
	
	I1013 21:08:05.445391   24285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-192425
	I1013 21:08:05.462570   24285 main.go:141] libmachine: Using SSH client type: native
	I1013 21:08:05.462855   24285 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1013 21:08:05.462869   24285 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-192425' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-192425/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-192425' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 21:08:05.607957   24285 main.go:141] libmachine: SSH cmd err, output: <nil>: 
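
The shell fragment above only modifies /etc/hosts when the hostname entry is missing, and it rewrites the existing 127.0.1.1 line rather than appending a second one, so repeated provisioning passes stay idempotent. A quick after-the-fact check (sketch):

    # expect exactly one 127.0.1.1 line, pointing at functional-192425
    grep -n '^127.0.1.1' /etc/hosts
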
	I1013 21:08:05.607971   24285 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-2495/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-2495/.minikube}
	I1013 21:08:05.607997   24285 ubuntu.go:190] setting up certificates
	I1013 21:08:05.608006   24285 provision.go:84] configureAuth start
	I1013 21:08:05.608094   24285 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-192425
	I1013 21:08:05.626132   24285 provision.go:143] copyHostCerts
	I1013 21:08:05.626183   24285 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-2495/.minikube/key.pem, removing ...
	I1013 21:08:05.626198   24285 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-2495/.minikube/key.pem
	I1013 21:08:05.626272   24285 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-2495/.minikube/key.pem (1675 bytes)
	I1013 21:08:05.626387   24285 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-2495/.minikube/ca.pem, removing ...
	I1013 21:08:05.626391   24285 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-2495/.minikube/ca.pem
	I1013 21:08:05.626418   24285 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-2495/.minikube/ca.pem (1082 bytes)
	I1013 21:08:05.626474   24285 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-2495/.minikube/cert.pem, removing ...
	I1013 21:08:05.626478   24285 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-2495/.minikube/cert.pem
	I1013 21:08:05.626499   24285 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-2495/.minikube/cert.pem (1123 bytes)
	I1013 21:08:05.626551   24285 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-2495/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca-key.pem org=jenkins.functional-192425 san=[127.0.0.1 192.168.49.2 functional-192425 localhost minikube]
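
The server certificate is regenerated with the SANs listed above so that TLS connections to the machine succeed whether they use loopback, the container IP, the hostname, or the minikube aliases. One way to confirm the SANs on the copied certificate (sketch; the path comes from the copyRemoteCerts step just below):

    # print the Subject Alternative Name extension of the provisioned server cert
    openssl x509 -noout -text -in /etc/docker/server.pem | grep -A1 'Subject Alternative Name'
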
	I1013 21:08:06.358074   24285 provision.go:177] copyRemoteCerts
	I1013 21:08:06.358121   24285 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 21:08:06.358158   24285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-192425
	I1013 21:08:06.374013   24285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/functional-192425/id_rsa Username:docker}
	I1013 21:08:06.475521   24285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1013 21:08:06.493083   24285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1013 21:08:06.512820   24285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1013 21:08:06.531353   24285 provision.go:87] duration metric: took 923.325323ms to configureAuth
	I1013 21:08:06.531370   24285 ubuntu.go:206] setting minikube options for container-runtime
	I1013 21:08:06.531577   24285 config.go:182] Loaded profile config "functional-192425": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:08:06.531687   24285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-192425
	I1013 21:08:06.550908   24285 main.go:141] libmachine: Using SSH client type: native
	I1013 21:08:06.551208   24285 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1013 21:08:06.551220   24285 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1013 21:08:11.919153   24285 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1013 21:08:11.919167   24285 machine.go:96] duration metric: took 6.820757836s to provisionDockerMachine
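
The bulk of that time is the crio restart triggered by the /etc/sysconfig/crio.minikube write above, which hands CRI-O an --insecure-registry flag covering the service CIDR. Reproduced as a standalone sketch (this assumes, as in the minikube base image, that the crio unit sources that environment file):

    # same option the log writes; CRIO_MINIKUBE_OPTIONS only takes effect if crio's unit reads this file
    sudo mkdir -p /etc/sysconfig
    printf "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" | sudo tee /etc/sysconfig/crio.minikube
    sudo systemctl restart crio
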
	I1013 21:08:11.919176   24285 start.go:293] postStartSetup for "functional-192425" (driver="docker")
	I1013 21:08:11.919185   24285 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 21:08:11.919258   24285 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 21:08:11.919292   24285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-192425
	I1013 21:08:11.937283   24285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/functional-192425/id_rsa Username:docker}
	I1013 21:08:12.040169   24285 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 21:08:12.043667   24285 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1013 21:08:12.043684   24285 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1013 21:08:12.043694   24285 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-2495/.minikube/addons for local assets ...
	I1013 21:08:12.043750   24285 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-2495/.minikube/files for local assets ...
	I1013 21:08:12.043849   24285 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem -> 42992.pem in /etc/ssl/certs
	I1013 21:08:12.043923   24285 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/test/nested/copy/4299/hosts -> hosts in /etc/test/nested/copy/4299
	I1013 21:08:12.043969   24285 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4299
	I1013 21:08:12.051815   24285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem --> /etc/ssl/certs/42992.pem (1708 bytes)
	I1013 21:08:12.070224   24285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/test/nested/copy/4299/hosts --> /etc/test/nested/copy/4299/hosts (40 bytes)
	I1013 21:08:12.088523   24285 start.go:296] duration metric: took 169.333296ms for postStartSetup
	I1013 21:08:12.088595   24285 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 21:08:12.088635   24285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-192425
	I1013 21:08:12.106029   24285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/functional-192425/id_rsa Username:docker}
	I1013 21:08:12.205262   24285 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1013 21:08:12.210553   24285 fix.go:56] duration metric: took 7.132714716s for fixHost
	I1013 21:08:12.210567   24285 start.go:83] releasing machines lock for "functional-192425", held for 7.132751885s
	I1013 21:08:12.210641   24285 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-192425
	I1013 21:08:12.229820   24285 ssh_runner.go:195] Run: cat /version.json
	I1013 21:08:12.229841   24285 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 21:08:12.229867   24285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-192425
	I1013 21:08:12.229889   24285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-192425
	I1013 21:08:12.248950   24285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/functional-192425/id_rsa Username:docker}
	I1013 21:08:12.262586   24285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/functional-192425/id_rsa Username:docker}
	I1013 21:08:12.440697   24285 ssh_runner.go:195] Run: systemctl --version
	I1013 21:08:12.447256   24285 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1013 21:08:12.483891   24285 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 21:08:12.488481   24285 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 21:08:12.488549   24285 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 21:08:12.496599   24285 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1013 21:08:12.496611   24285 start.go:495] detecting cgroup driver to use...
	I1013 21:08:12.496640   24285 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1013 21:08:12.496692   24285 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1013 21:08:12.512024   24285 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1013 21:08:12.525823   24285 docker.go:218] disabling cri-docker service (if available) ...
	I1013 21:08:12.525881   24285 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 21:08:12.540874   24285 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 21:08:12.553391   24285 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 21:08:12.680390   24285 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 21:08:12.814974   24285 docker.go:234] disabling docker service ...
	I1013 21:08:12.815045   24285 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 21:08:12.831503   24285 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 21:08:12.845119   24285 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 21:08:12.971039   24285 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 21:08:13.098451   24285 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1013 21:08:13.111698   24285 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 21:08:13.126722   24285 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1013 21:08:13.126772   24285 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 21:08:13.135403   24285 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1013 21:08:13.135462   24285 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 21:08:13.143901   24285 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 21:08:13.152661   24285 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 21:08:13.161692   24285 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 21:08:13.169220   24285 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 21:08:13.177544   24285 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 21:08:13.185595   24285 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 21:08:13.193525   24285 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 21:08:13.200542   24285 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1013 21:08:13.207607   24285 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 21:08:13.333880   24285 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1013 21:08:18.268517   24285 ssh_runner.go:235] Completed: sudo systemctl restart crio: (4.93461428s)
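
The preceding block points crictl at the CRI-O socket via /etc/crictl.yaml and then patches /etc/crio/crio.conf.d/02-crio.conf in place with sed (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) before restarting the service, which accounts for the ~4.9s above. After the restart the result can be spot-checked; the expected values in the comments are taken from the commands above (sketch):

    sudo cat /etc/crictl.yaml
    # runtime-endpoint: unix:///var/run/crio/crio.sock
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10.1"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",
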
	I1013 21:08:18.268532   24285 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1013 21:08:18.268580   24285 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1013 21:08:18.272543   24285 start.go:563] Will wait 60s for crictl version
	I1013 21:08:18.272596   24285 ssh_runner.go:195] Run: which crictl
	I1013 21:08:18.276096   24285 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1013 21:08:18.303430   24285 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1013 21:08:18.303499   24285 ssh_runner.go:195] Run: crio --version
	I1013 21:08:18.330273   24285 ssh_runner.go:195] Run: crio --version
	I1013 21:08:18.361580   24285 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1013 21:08:18.364499   24285 cli_runner.go:164] Run: docker network inspect functional-192425 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
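
The --format argument above is a Go template that flattens the network's name, driver, subnet, gateway, MTU, and attached container IPs into one line. A smaller sketch of the same technique, extracting just the subnet of the profile's network:

    # .IPAM.Config is a list, so take the first entry before reading .Subnet
    docker network inspect functional-192425 --format '{{(index .IPAM.Config 0).Subnet}}'
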
	I1013 21:08:18.380044   24285 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1013 21:08:18.387020   24285 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1013 21:08:18.389794   24285 kubeadm.go:883] updating cluster {Name:functional-192425 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-192425 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 21:08:18.389907   24285 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 21:08:18.389973   24285 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 21:08:18.421658   24285 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 21:08:18.421670   24285 crio.go:433] Images already preloaded, skipping extraction
	I1013 21:08:18.421733   24285 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 21:08:18.447203   24285 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 21:08:18.447217   24285 cache_images.go:85] Images are preloaded, skipping loading
	I1013 21:08:18.447223   24285 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1013 21:08:18.447320   24285 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-192425 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-192425 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
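
The unit fragment above relies on the standard systemd override pattern: the empty ExecStart= line clears the command inherited from the packaged kubelet.service, so the drop-in's full command line replaces it rather than being appended. Once the files have been copied over (the scp steps below write 10-kubeadm.conf and kubelet.service), the merged unit can be reviewed on the node:

    # prints /lib/systemd/system/kubelet.service followed by the 10-kubeadm.conf drop-in
    systemctl cat kubelet
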
	I1013 21:08:18.447398   24285 ssh_runner.go:195] Run: crio config
	I1013 21:08:18.508245   24285 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1013 21:08:18.508269   24285 cni.go:84] Creating CNI manager for ""
	I1013 21:08:18.508278   24285 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 21:08:18.508289   24285 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1013 21:08:18.508309   24285 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-192425 NodeName:functional-192425 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:ma
p[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 21:08:18.508439   24285 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-192425"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1013 21:08:18.508515   24285 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1013 21:08:18.516364   24285 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 21:08:18.516430   24285 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 21:08:18.523584   24285 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1013 21:08:18.535322   24285 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 21:08:18.547040   24285 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2064 bytes)
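
That scp stages the manifest shown above as /var/tmp/minikube/kubeadm.yaml.new; it is only swapped in if it differs from the previous kubeadm.yaml (see the diff further down). As a sketch, a staged file like this could be sanity-checked on the node with the bundled kubeadm binary; the validate subcommand is assumed to exist in this release rather than confirmed by the log:

    # static validation only; does not touch the running cluster
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
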
	I1013 21:08:18.559391   24285 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1013 21:08:18.562823   24285 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 21:08:18.686111   24285 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 21:08:18.699616   24285 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/functional-192425 for IP: 192.168.49.2
	I1013 21:08:18.699627   24285 certs.go:195] generating shared ca certs ...
	I1013 21:08:18.699640   24285 certs.go:227] acquiring lock for ca certs: {Name:mk2386d3847709c1fe7ff4ab092e2e3fd8551167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:08:18.699762   24285 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-2495/.minikube/ca.key
	I1013 21:08:18.699826   24285 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.key
	I1013 21:08:18.699832   24285 certs.go:257] generating profile certs ...
	I1013 21:08:18.699926   24285 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/functional-192425/client.key
	I1013 21:08:18.699971   24285 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/functional-192425/apiserver.key.f53426cf
	I1013 21:08:18.700015   24285 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/functional-192425/proxy-client.key
	I1013 21:08:18.700118   24285 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/4299.pem (1338 bytes)
	W1013 21:08:18.700152   24285 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-2495/.minikube/certs/4299_empty.pem, impossibly tiny 0 bytes
	I1013 21:08:18.700159   24285 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca-key.pem (1679 bytes)
	I1013 21:08:18.700181   24285 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem (1082 bytes)
	I1013 21:08:18.700201   24285 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem (1123 bytes)
	I1013 21:08:18.700227   24285 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/key.pem (1675 bytes)
	I1013 21:08:18.700266   24285 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem (1708 bytes)
	I1013 21:08:18.700853   24285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 21:08:18.718289   24285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1013 21:08:18.735450   24285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 21:08:18.753167   24285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1013 21:08:18.769462   24285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/functional-192425/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1013 21:08:18.786303   24285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/functional-192425/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1013 21:08:18.802578   24285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/functional-192425/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 21:08:18.818909   24285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/functional-192425/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1013 21:08:18.835239   24285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 21:08:18.855153   24285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/certs/4299.pem --> /usr/share/ca-certificates/4299.pem (1338 bytes)
	I1013 21:08:18.871513   24285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem --> /usr/share/ca-certificates/42992.pem (1708 bytes)
	I1013 21:08:18.887337   24285 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 21:08:18.899570   24285 ssh_runner.go:195] Run: openssl version
	I1013 21:08:18.905759   24285 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 21:08:18.913882   24285 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 21:08:18.917240   24285 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 20:59 /usr/share/ca-certificates/minikubeCA.pem
	I1013 21:08:18.917293   24285 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 21:08:18.957745   24285 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1013 21:08:18.965863   24285 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4299.pem && ln -fs /usr/share/ca-certificates/4299.pem /etc/ssl/certs/4299.pem"
	I1013 21:08:18.973715   24285 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4299.pem
	I1013 21:08:18.977376   24285 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 13 21:06 /usr/share/ca-certificates/4299.pem
	I1013 21:08:18.977443   24285 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4299.pem
	I1013 21:08:19.017902   24285 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4299.pem /etc/ssl/certs/51391683.0"
	I1013 21:08:19.025598   24285 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/42992.pem && ln -fs /usr/share/ca-certificates/42992.pem /etc/ssl/certs/42992.pem"
	I1013 21:08:19.033390   24285 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42992.pem
	I1013 21:08:19.036962   24285 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 13 21:06 /usr/share/ca-certificates/42992.pem
	I1013 21:08:19.037013   24285 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42992.pem
	I1013 21:08:19.079223   24285 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/42992.pem /etc/ssl/certs/3ec20f2e.0"
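
The ln -fs targets above (b5213941.0, 51391683.0, 3ec20f2e.0) follow the OpenSSL convention of naming trusted certificates after their subject hash so that verification can find them in /etc/ssl/certs. The hash component is exactly what openssl x509 -hash prints; a sketch of the same pattern for one certificate:

    # derive the subject hash and create the lookup symlink OpenSSL expects
    h=$(openssl x509 -hash -noout -in /etc/ssl/certs/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
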
	I1013 21:08:19.086749   24285 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 21:08:19.090261   24285 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1013 21:08:19.130608   24285 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1013 21:08:19.171248   24285 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1013 21:08:19.213011   24285 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1013 21:08:19.258475   24285 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1013 21:08:19.299370   24285 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
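
Each openssl x509 -checkend 86400 call above exits non-zero if the certificate would expire within the next 24 hours, presumably so the restart path knows whether the existing control-plane certificates can be reused as-is. Standalone sketch:

    # exit status 0 means the cert is still valid for at least another day
    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
      echo "certificate valid for at least another 24h"
    else
      echo "certificate expires within 24h (or could not be read)"
    fi
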
	I1013 21:08:19.340040   24285 kubeadm.go:400] StartCluster: {Name:functional-192425 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-192425 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 21:08:19.340116   24285 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 21:08:19.340181   24285 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 21:08:19.367688   24285 cri.go:89] found id: "056c812596a0c20e32dbf6ac5b9a66ee7b3d6e283364d0e40c581cf11c53cc1e"
	I1013 21:08:19.367698   24285 cri.go:89] found id: "b3e91ed9fdd87dd5e579a56f4f26eb9671b9f8c976c3d639d68eb9063bae1e18"
	I1013 21:08:19.367701   24285 cri.go:89] found id: "d95bf91e9ff358f8949636821ef90f8d792d7188cad30b53f4c544ad0f06dfc5"
	I1013 21:08:19.367704   24285 cri.go:89] found id: "462360293fc54998a4a64128a59da7b62f7c636443876e4faca21edcb302a3bc"
	I1013 21:08:19.367706   24285 cri.go:89] found id: "11f00bec5327064bf506f1e3cea4b1d276be239f4983128d63543fbe29c372c4"
	I1013 21:08:19.367713   24285 cri.go:89] found id: "83c7c18d03c3960ab3ca99d2633249db6dfc9188c92983f7d7f9aa3da8469229"
	I1013 21:08:19.367716   24285 cri.go:89] found id: "02e492ce60107116d8980e3f4a4c64d51b4804700c1dd0c4f222e8666f3a36bc"
	I1013 21:08:19.367718   24285 cri.go:89] found id: "4b70b2fcd0e1da5aeba69f969b1b5b9b81f57716fd08a62025549577be101a68"
	I1013 21:08:19.367720   24285 cri.go:89] found id: "8a1720522f111a69c553bbcb3e4b0b1c75035dab950f875d163a77bfd1bc8555"
	I1013 21:08:19.367726   24285 cri.go:89] found id: "a20022edbbc87776f4f195a07503a31a3e6d087539060e86c0ba9b82514623d4"
	I1013 21:08:19.367728   24285 cri.go:89] found id: "6ae4c21ed1489850fa61b4851abd02dce7b72128b63fa367063f62877aba6b86"
	I1013 21:08:19.367739   24285 cri.go:89] found id: "5b2120043d2b5d2d394dfb5339363fae3b6d19a981850de121885f1c641565c1"
	I1013 21:08:19.367741   24285 cri.go:89] found id: "da8ba60e50b009859f76d3034b8135f695e0b5351729b933b3f2690d658e67f3"
	I1013 21:08:19.367743   24285 cri.go:89] found id: "c2e3a2bdf0414acc3240430f7f7a332d3f123a240de9b0c66ea1ed6e272cb380"
	I1013 21:08:19.367745   24285 cri.go:89] found id: "af88293ed8c68cf4cab5045a4bc4144366775906e8bcd1727767fcc62e9e24d9"
	I1013 21:08:19.367750   24285 cri.go:89] found id: "75c18ebbe974233693f0b7361c1c4f619fde47587656d334b5a306d9878996ef"
	I1013 21:08:19.367752   24285 cri.go:89] found id: ""
	I1013 21:08:19.367820   24285 ssh_runner.go:195] Run: sudo runc list -f json
	W1013 21:08:19.378430   24285 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T21:08:19Z" level=error msg="open /run/runc: no such file or directory"
	I1013 21:08:19.378494   24285 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1013 21:08:19.385841   24285 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1013 21:08:19.385849   24285 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1013 21:08:19.385896   24285 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1013 21:08:19.392939   24285 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1013 21:08:19.393459   24285 kubeconfig.go:125] found "functional-192425" server: "https://192.168.49.2:8441"
	I1013 21:08:19.394656   24285 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1013 21:08:19.402317   24285 kubeadm.go:644] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-10-13 21:06:23.403340130 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-10-13 21:08:18.552577035 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
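	(The drift above is expected: the profile's ExtraOptions carry an apiserver `enable-admission-plugins` override, so the regenerated kubeadm.yaml replaces the default plugin list. A sketch of the kind of invocation that produces this override, assuming the standard minikube flag rather than the exact command the test used:

	out/minikube-linux-arm64 -p functional-192425 start \
	  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision

	Any mismatch between the stored yaml and the freshly rendered one forces the reconfigure path that follows.)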
	I1013 21:08:19.402325   24285 kubeadm.go:1160] stopping kube-system containers ...
	I1013 21:08:19.402336   24285 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1013 21:08:19.402387   24285 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 21:08:19.431291   24285 cri.go:89] found id: "056c812596a0c20e32dbf6ac5b9a66ee7b3d6e283364d0e40c581cf11c53cc1e"
	I1013 21:08:19.431303   24285 cri.go:89] found id: "b3e91ed9fdd87dd5e579a56f4f26eb9671b9f8c976c3d639d68eb9063bae1e18"
	I1013 21:08:19.431306   24285 cri.go:89] found id: "d95bf91e9ff358f8949636821ef90f8d792d7188cad30b53f4c544ad0f06dfc5"
	I1013 21:08:19.431311   24285 cri.go:89] found id: "462360293fc54998a4a64128a59da7b62f7c636443876e4faca21edcb302a3bc"
	I1013 21:08:19.431313   24285 cri.go:89] found id: "11f00bec5327064bf506f1e3cea4b1d276be239f4983128d63543fbe29c372c4"
	I1013 21:08:19.431316   24285 cri.go:89] found id: "83c7c18d03c3960ab3ca99d2633249db6dfc9188c92983f7d7f9aa3da8469229"
	I1013 21:08:19.431319   24285 cri.go:89] found id: "02e492ce60107116d8980e3f4a4c64d51b4804700c1dd0c4f222e8666f3a36bc"
	I1013 21:08:19.431322   24285 cri.go:89] found id: "4b70b2fcd0e1da5aeba69f969b1b5b9b81f57716fd08a62025549577be101a68"
	I1013 21:08:19.431338   24285 cri.go:89] found id: "8a1720522f111a69c553bbcb3e4b0b1c75035dab950f875d163a77bfd1bc8555"
	I1013 21:08:19.431345   24285 cri.go:89] found id: "a20022edbbc87776f4f195a07503a31a3e6d087539060e86c0ba9b82514623d4"
	I1013 21:08:19.431347   24285 cri.go:89] found id: "6ae4c21ed1489850fa61b4851abd02dce7b72128b63fa367063f62877aba6b86"
	I1013 21:08:19.431351   24285 cri.go:89] found id: "5b2120043d2b5d2d394dfb5339363fae3b6d19a981850de121885f1c641565c1"
	I1013 21:08:19.431353   24285 cri.go:89] found id: "da8ba60e50b009859f76d3034b8135f695e0b5351729b933b3f2690d658e67f3"
	I1013 21:08:19.431355   24285 cri.go:89] found id: "c2e3a2bdf0414acc3240430f7f7a332d3f123a240de9b0c66ea1ed6e272cb380"
	I1013 21:08:19.431357   24285 cri.go:89] found id: "af88293ed8c68cf4cab5045a4bc4144366775906e8bcd1727767fcc62e9e24d9"
	I1013 21:08:19.431362   24285 cri.go:89] found id: "75c18ebbe974233693f0b7361c1c4f619fde47587656d334b5a306d9878996ef"
	I1013 21:08:19.431364   24285 cri.go:89] found id: ""
	I1013 21:08:19.431368   24285 cri.go:252] Stopping containers: [056c812596a0c20e32dbf6ac5b9a66ee7b3d6e283364d0e40c581cf11c53cc1e b3e91ed9fdd87dd5e579a56f4f26eb9671b9f8c976c3d639d68eb9063bae1e18 d95bf91e9ff358f8949636821ef90f8d792d7188cad30b53f4c544ad0f06dfc5 462360293fc54998a4a64128a59da7b62f7c636443876e4faca21edcb302a3bc 11f00bec5327064bf506f1e3cea4b1d276be239f4983128d63543fbe29c372c4 83c7c18d03c3960ab3ca99d2633249db6dfc9188c92983f7d7f9aa3da8469229 02e492ce60107116d8980e3f4a4c64d51b4804700c1dd0c4f222e8666f3a36bc 4b70b2fcd0e1da5aeba69f969b1b5b9b81f57716fd08a62025549577be101a68 8a1720522f111a69c553bbcb3e4b0b1c75035dab950f875d163a77bfd1bc8555 a20022edbbc87776f4f195a07503a31a3e6d087539060e86c0ba9b82514623d4 6ae4c21ed1489850fa61b4851abd02dce7b72128b63fa367063f62877aba6b86 5b2120043d2b5d2d394dfb5339363fae3b6d19a981850de121885f1c641565c1 da8ba60e50b009859f76d3034b8135f695e0b5351729b933b3f2690d658e67f3 c2e3a2bdf0414acc3240430f7f7a332d3f123a240de9b0c66ea1ed6e272cb380 af88293ed8c68cf4cab5045a4bc4144366775906e8bcd1727767fcc62e9e24d9 75c18ebbe974233693f0b7361c1c4f619fde47587656d334b5a306d9878996ef]
	I1013 21:08:19.431424   24285 ssh_runner.go:195] Run: which crictl
	I1013 21:08:19.435040   24285 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl stop --timeout=10 056c812596a0c20e32dbf6ac5b9a66ee7b3d6e283364d0e40c581cf11c53cc1e b3e91ed9fdd87dd5e579a56f4f26eb9671b9f8c976c3d639d68eb9063bae1e18 d95bf91e9ff358f8949636821ef90f8d792d7188cad30b53f4c544ad0f06dfc5 462360293fc54998a4a64128a59da7b62f7c636443876e4faca21edcb302a3bc 11f00bec5327064bf506f1e3cea4b1d276be239f4983128d63543fbe29c372c4 83c7c18d03c3960ab3ca99d2633249db6dfc9188c92983f7d7f9aa3da8469229 02e492ce60107116d8980e3f4a4c64d51b4804700c1dd0c4f222e8666f3a36bc 4b70b2fcd0e1da5aeba69f969b1b5b9b81f57716fd08a62025549577be101a68 8a1720522f111a69c553bbcb3e4b0b1c75035dab950f875d163a77bfd1bc8555 a20022edbbc87776f4f195a07503a31a3e6d087539060e86c0ba9b82514623d4 6ae4c21ed1489850fa61b4851abd02dce7b72128b63fa367063f62877aba6b86 5b2120043d2b5d2d394dfb5339363fae3b6d19a981850de121885f1c641565c1 da8ba60e50b009859f76d3034b8135f695e0b5351729b933b3f2690d658e67f3 c2e3a2bdf0414acc3240430f7f7a332d3f123a240de9b0c66ea1ed6e272cb380 af88293ed8c68cf4cab5045a4bc4144366775906e8bcd1727767fcc62e9e24d9 75c18ebbe974233693f0b7361c1c4f619fde47587656d334b5a306d9878996ef
	I1013 21:08:19.531634   24285 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1013 21:08:19.654801   24285 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1013 21:08:19.662372   24285 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5631 Oct 13 21:06 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Oct 13 21:06 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1972 Oct 13 21:06 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Oct 13 21:06 /etc/kubernetes/scheduler.conf
	
	I1013 21:08:19.662430   24285 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1013 21:08:19.669621   24285 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1013 21:08:19.676843   24285 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1013 21:08:19.676900   24285 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1013 21:08:19.683859   24285 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1013 21:08:19.691112   24285 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1013 21:08:19.691168   24285 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1013 21:08:19.698250   24285 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1013 21:08:19.705316   24285 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1013 21:08:19.705366   24285 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1013 21:08:19.712079   24285 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1013 21:08:19.719488   24285 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1013 21:08:19.765451   24285 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1013 21:08:21.654547   24285 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.889074003s)
	I1013 21:08:21.654605   24285 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1013 21:08:21.876296   24285 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1013 21:08:21.942779   24285 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
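	(At this point the certificates, kubeconfigs, kubelet config, control-plane static-pod manifests and local etcd manifest have all been regenerated by the `kubeadm init phase` runs above. A hedged way to spot-check the result on the node, not part of the test output:

	sudo ls /etc/kubernetes/manifests        # static-pod manifests written by the phases above
	sudo kubeadm certs check-expiration      # confirm the regenerated certificates are valid

	The log then waits for the apiserver process and its /healthz endpoint.)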
	I1013 21:08:22.013725   24285 api_server.go:52] waiting for apiserver process to appear ...
	I1013 21:08:22.013797   24285 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 21:08:22.514959   24285 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 21:08:23.013974   24285 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 21:08:23.024833   24285 api_server.go:72] duration metric: took 1.011119509s to wait for apiserver process to appear ...
	I1013 21:08:23.024846   24285 api_server.go:88] waiting for apiserver healthz status ...
	I1013 21:08:23.024863   24285 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1013 21:08:26.147940   24285 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1013 21:08:26.147954   24285 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1013 21:08:26.147966   24285 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1013 21:08:26.387208   24285 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1013 21:08:26.387231   24285 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1013 21:08:26.525367   24285 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1013 21:08:26.534910   24285 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1013 21:08:26.534926   24285 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1013 21:08:27.025144   24285 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1013 21:08:27.033635   24285 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1013 21:08:27.033661   24285 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1013 21:08:27.524951   24285 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1013 21:08:27.536619   24285 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1013 21:08:27.554632   24285 api_server.go:141] control plane version: v1.34.1
	I1013 21:08:27.554649   24285 api_server.go:131] duration metric: took 4.529797665s to wait for apiserver health ...
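	(The 403 earlier comes from probing /healthz anonymously; the subsequent 500s list the post-start hooks that had not finished yet. A sketch of the same probe run by hand with the node's admin credentials, assuming the usual kubeconfig path:

	sudo kubectl --kubeconfig /etc/kubernetes/admin.conf get --raw '/healthz?verbose'

	The `?verbose` query returns the same per-check [+]/[-] listing even once every check reports ok.)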
	I1013 21:08:27.554656   24285 cni.go:84] Creating CNI manager for ""
	I1013 21:08:27.554661   24285 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 21:08:27.558229   24285 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1013 21:08:27.561224   24285 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1013 21:08:27.565357   24285 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1013 21:08:27.565367   24285 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1013 21:08:27.578687   24285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
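	(With the docker driver and the crio runtime, minikube recommends and applies kindnet as the CNI, as the manifest apply above shows. A hedged check that it took effect, assuming the DaemonSet keeps its usual name and the default CNI binary path:

	kubectl --context functional-192425 -n kube-system get daemonset kindnet
	out/minikube-linux-arm64 -p functional-192425 ssh -- ls /opt/cni/bin   # the portmap binary stat'ed above lives here

	Both checks are assumptions about the default layout, not something the test asserts.)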
	I1013 21:08:28.075328   24285 system_pods.go:43] waiting for kube-system pods to appear ...
	I1013 21:08:28.079024   24285 system_pods.go:59] 8 kube-system pods found
	I1013 21:08:28.079053   24285 system_pods.go:61] "coredns-66bc5c9577-nrb79" [a4d236ef-be6a-4bd9-be90-a96cffaf3fa2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 21:08:28.079061   24285 system_pods.go:61] "etcd-functional-192425" [07e6abc6-c3d8-49fb-84de-dad1b94a0938] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1013 21:08:28.079070   24285 system_pods.go:61] "kindnet-vjh4c" [c91bc02a-4217-47fe-8fb9-ef09450c162a] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1013 21:08:28.079077   24285 system_pods.go:61] "kube-apiserver-functional-192425" [134c2ece-10f1-4984-9e90-78c9175bfa14] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1013 21:08:28.079082   24285 system_pods.go:61] "kube-controller-manager-functional-192425" [7863ddbe-e00c-4b08-a42f-c0c9314d6206] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1013 21:08:28.079093   24285 system_pods.go:61] "kube-proxy-p24r2" [424c31de-899b-415e-a306-1da562882916] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1013 21:08:28.079099   24285 system_pods.go:61] "kube-scheduler-functional-192425" [7f67ae82-fbcd-4626-9aa3-138aa74ff9dc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1013 21:08:28.079105   24285 system_pods.go:61] "storage-provisioner" [b81b5df4-be3a-4a60-9ea0-1a280df69068] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 21:08:28.079111   24285 system_pods.go:74] duration metric: took 3.772709ms to wait for pod list to return data ...
	I1013 21:08:28.079120   24285 node_conditions.go:102] verifying NodePressure condition ...
	I1013 21:08:28.082010   24285 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1013 21:08:28.082028   24285 node_conditions.go:123] node cpu capacity is 2
	I1013 21:08:28.082041   24285 node_conditions.go:105] duration metric: took 2.915228ms to run NodePressure ...
	I1013 21:08:28.082099   24285 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1013 21:08:28.340313   24285 kubeadm.go:728] waiting for restarted kubelet to initialise ...
	I1013 21:08:28.343649   24285 kubeadm.go:743] kubelet initialised
	I1013 21:08:28.343659   24285 kubeadm.go:744] duration metric: took 3.334927ms waiting for restarted kubelet to initialise ...
	I1013 21:08:28.343674   24285 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1013 21:08:28.353295   24285 ops.go:34] apiserver oom_adj: -16
	I1013 21:08:28.353307   24285 kubeadm.go:601] duration metric: took 8.967453006s to restartPrimaryControlPlane
	I1013 21:08:28.353314   24285 kubeadm.go:402] duration metric: took 9.013283662s to StartCluster
	I1013 21:08:28.353329   24285 settings.go:142] acquiring lock: {Name:mk4a4b065845724eb9b4bb1832a39a02e57dd066 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:08:28.353387   24285 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-2495/kubeconfig
	I1013 21:08:28.354045   24285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/kubeconfig: {Name:mke211bdcf461494c5205a242d94f4d44bbce10e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:08:28.354248   24285 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 21:08:28.354509   24285 config.go:182] Loaded profile config "functional-192425": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:08:28.354548   24285 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1013 21:08:28.354605   24285 addons.go:69] Setting storage-provisioner=true in profile "functional-192425"
	I1013 21:08:28.354616   24285 addons.go:238] Setting addon storage-provisioner=true in "functional-192425"
	W1013 21:08:28.354621   24285 addons.go:247] addon storage-provisioner should already be in state true
	I1013 21:08:28.354639   24285 host.go:66] Checking if "functional-192425" exists ...
	I1013 21:08:28.355036   24285 cli_runner.go:164] Run: docker container inspect functional-192425 --format={{.State.Status}}
	I1013 21:08:28.355410   24285 addons.go:69] Setting default-storageclass=true in profile "functional-192425"
	I1013 21:08:28.355431   24285 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-192425"
	I1013 21:08:28.355707   24285 cli_runner.go:164] Run: docker container inspect functional-192425 --format={{.State.Status}}
	I1013 21:08:28.359246   24285 out.go:179] * Verifying Kubernetes components...
	I1013 21:08:28.362734   24285 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 21:08:28.395532   24285 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1013 21:08:28.398658   24285 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 21:08:28.398669   24285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1013 21:08:28.398732   24285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-192425
	I1013 21:08:28.405418   24285 addons.go:238] Setting addon default-storageclass=true in "functional-192425"
	W1013 21:08:28.405428   24285 addons.go:247] addon default-storageclass should already be in state true
	I1013 21:08:28.405450   24285 host.go:66] Checking if "functional-192425" exists ...
	I1013 21:08:28.405883   24285 cli_runner.go:164] Run: docker container inspect functional-192425 --format={{.State.Status}}
	I1013 21:08:28.437137   24285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/functional-192425/id_rsa Username:docker}
	I1013 21:08:28.439398   24285 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1013 21:08:28.439409   24285 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1013 21:08:28.439472   24285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-192425
	I1013 21:08:28.471758   24285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/functional-192425/id_rsa Username:docker}
	I1013 21:08:28.589316   24285 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 21:08:28.626530   24285 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 21:08:28.648070   24285 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1013 21:08:29.345528   24285 node_ready.go:35] waiting up to 6m0s for node "functional-192425" to be "Ready" ...
	I1013 21:08:29.347838   24285 node_ready.go:49] node "functional-192425" is "Ready"
	I1013 21:08:29.347851   24285 node_ready.go:38] duration metric: took 2.298374ms for node "functional-192425" to be "Ready" ...
	I1013 21:08:29.347863   24285 api_server.go:52] waiting for apiserver process to appear ...
	I1013 21:08:29.347944   24285 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 21:08:29.357729   24285 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1013 21:08:29.360609   24285 addons.go:514] duration metric: took 1.006048759s for enable addons: enabled=[storage-provisioner default-storageclass]
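	(Only storage-provisioner and default-storageclass are re-enabled on this restart. A hedged way to confirm from the host, using standard minikube/kubectl commands rather than anything the test itself runs:

	out/minikube-linux-arm64 -p functional-192425 addons list | grep enabled
	kubectl --context functional-192425 get storageclass

	The log then resumes the apiserver and kube-system readiness checks.)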
	I1013 21:08:29.361553   24285 api_server.go:72] duration metric: took 1.007283404s to wait for apiserver process to appear ...
	I1013 21:08:29.361562   24285 api_server.go:88] waiting for apiserver healthz status ...
	I1013 21:08:29.361578   24285 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1013 21:08:29.370880   24285 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1013 21:08:29.371768   24285 api_server.go:141] control plane version: v1.34.1
	I1013 21:08:29.371863   24285 api_server.go:131] duration metric: took 10.212301ms to wait for apiserver health ...
	I1013 21:08:29.371872   24285 system_pods.go:43] waiting for kube-system pods to appear ...
	I1013 21:08:29.374557   24285 system_pods.go:59] 8 kube-system pods found
	I1013 21:08:29.374573   24285 system_pods.go:61] "coredns-66bc5c9577-nrb79" [a4d236ef-be6a-4bd9-be90-a96cffaf3fa2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 21:08:29.374580   24285 system_pods.go:61] "etcd-functional-192425" [07e6abc6-c3d8-49fb-84de-dad1b94a0938] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1013 21:08:29.374585   24285 system_pods.go:61] "kindnet-vjh4c" [c91bc02a-4217-47fe-8fb9-ef09450c162a] Running
	I1013 21:08:29.374591   24285 system_pods.go:61] "kube-apiserver-functional-192425" [134c2ece-10f1-4984-9e90-78c9175bfa14] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1013 21:08:29.374597   24285 system_pods.go:61] "kube-controller-manager-functional-192425" [7863ddbe-e00c-4b08-a42f-c0c9314d6206] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1013 21:08:29.374601   24285 system_pods.go:61] "kube-proxy-p24r2" [424c31de-899b-415e-a306-1da562882916] Running
	I1013 21:08:29.374608   24285 system_pods.go:61] "kube-scheduler-functional-192425" [7f67ae82-fbcd-4626-9aa3-138aa74ff9dc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1013 21:08:29.374611   24285 system_pods.go:61] "storage-provisioner" [b81b5df4-be3a-4a60-9ea0-1a280df69068] Running
	I1013 21:08:29.374615   24285 system_pods.go:74] duration metric: took 2.739619ms to wait for pod list to return data ...
	I1013 21:08:29.374621   24285 default_sa.go:34] waiting for default service account to be created ...
	I1013 21:08:29.377100   24285 default_sa.go:45] found service account: "default"
	I1013 21:08:29.377110   24285 default_sa.go:55] duration metric: took 2.48488ms for default service account to be created ...
	I1013 21:08:29.377116   24285 system_pods.go:116] waiting for k8s-apps to be running ...
	I1013 21:08:29.380165   24285 system_pods.go:86] 8 kube-system pods found
	I1013 21:08:29.380181   24285 system_pods.go:89] "coredns-66bc5c9577-nrb79" [a4d236ef-be6a-4bd9-be90-a96cffaf3fa2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 21:08:29.380189   24285 system_pods.go:89] "etcd-functional-192425" [07e6abc6-c3d8-49fb-84de-dad1b94a0938] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1013 21:08:29.380193   24285 system_pods.go:89] "kindnet-vjh4c" [c91bc02a-4217-47fe-8fb9-ef09450c162a] Running
	I1013 21:08:29.380198   24285 system_pods.go:89] "kube-apiserver-functional-192425" [134c2ece-10f1-4984-9e90-78c9175bfa14] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1013 21:08:29.380204   24285 system_pods.go:89] "kube-controller-manager-functional-192425" [7863ddbe-e00c-4b08-a42f-c0c9314d6206] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1013 21:08:29.380207   24285 system_pods.go:89] "kube-proxy-p24r2" [424c31de-899b-415e-a306-1da562882916] Running
	I1013 21:08:29.380212   24285 system_pods.go:89] "kube-scheduler-functional-192425" [7f67ae82-fbcd-4626-9aa3-138aa74ff9dc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1013 21:08:29.380221   24285 system_pods.go:89] "storage-provisioner" [b81b5df4-be3a-4a60-9ea0-1a280df69068] Running
	I1013 21:08:29.380227   24285 system_pods.go:126] duration metric: took 3.107003ms to wait for k8s-apps to be running ...
	I1013 21:08:29.380233   24285 system_svc.go:44] waiting for kubelet service to be running ....
	I1013 21:08:29.380285   24285 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 21:08:29.393085   24285 system_svc.go:56] duration metric: took 12.842432ms WaitForService to wait for kubelet
	I1013 21:08:29.393102   24285 kubeadm.go:586] duration metric: took 1.038833942s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 21:08:29.393118   24285 node_conditions.go:102] verifying NodePressure condition ...
	I1013 21:08:29.396365   24285 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1013 21:08:29.396380   24285 node_conditions.go:123] node cpu capacity is 2
	I1013 21:08:29.396390   24285 node_conditions.go:105] duration metric: took 3.268ms to run NodePressure ...
	I1013 21:08:29.396401   24285 start.go:241] waiting for startup goroutines ...
	I1013 21:08:29.396408   24285 start.go:246] waiting for cluster config update ...
	I1013 21:08:29.396417   24285 start.go:255] writing updated cluster config ...
	I1013 21:08:29.396706   24285 ssh_runner.go:195] Run: rm -f paused
	I1013 21:08:29.400187   24285 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 21:08:29.403689   24285 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-nrb79" in "kube-system" namespace to be "Ready" or be gone ...
	W1013 21:08:31.409090   24285 pod_ready.go:104] pod "coredns-66bc5c9577-nrb79" is not "Ready", error: <nil>
	W1013 21:08:33.410297   24285 pod_ready.go:104] pod "coredns-66bc5c9577-nrb79" is not "Ready", error: <nil>
	I1013 21:08:34.909682   24285 pod_ready.go:94] pod "coredns-66bc5c9577-nrb79" is "Ready"
	I1013 21:08:34.909696   24285 pod_ready.go:86] duration metric: took 5.505993602s for pod "coredns-66bc5c9577-nrb79" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:08:34.912332   24285 pod_ready.go:83] waiting for pod "etcd-functional-192425" in "kube-system" namespace to be "Ready" or be gone ...
	W1013 21:08:36.917864   24285 pod_ready.go:104] pod "etcd-functional-192425" is not "Ready", error: <nil>
	W1013 21:08:38.917893   24285 pod_ready.go:104] pod "etcd-functional-192425" is not "Ready", error: <nil>
	I1013 21:08:40.917423   24285 pod_ready.go:94] pod "etcd-functional-192425" is "Ready"
	I1013 21:08:40.917436   24285 pod_ready.go:86] duration metric: took 6.00509299s for pod "etcd-functional-192425" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:08:40.919735   24285 pod_ready.go:83] waiting for pod "kube-apiserver-functional-192425" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:08:40.923914   24285 pod_ready.go:94] pod "kube-apiserver-functional-192425" is "Ready"
	I1013 21:08:40.923927   24285 pod_ready.go:86] duration metric: took 4.179929ms for pod "kube-apiserver-functional-192425" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:08:40.925916   24285 pod_ready.go:83] waiting for pod "kube-controller-manager-functional-192425" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:08:40.929825   24285 pod_ready.go:94] pod "kube-controller-manager-functional-192425" is "Ready"
	I1013 21:08:40.929837   24285 pod_ready.go:86] duration metric: took 3.909379ms for pod "kube-controller-manager-functional-192425" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:08:40.931825   24285 pod_ready.go:83] waiting for pod "kube-proxy-p24r2" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:08:41.115981   24285 pod_ready.go:94] pod "kube-proxy-p24r2" is "Ready"
	I1013 21:08:41.115996   24285 pod_ready.go:86] duration metric: took 184.159265ms for pod "kube-proxy-p24r2" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:08:41.315943   24285 pod_ready.go:83] waiting for pod "kube-scheduler-functional-192425" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:08:41.716043   24285 pod_ready.go:94] pod "kube-scheduler-functional-192425" is "Ready"
	I1013 21:08:41.716066   24285 pod_ready.go:86] duration metric: took 400.101588ms for pod "kube-scheduler-functional-192425" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:08:41.716076   24285 pod_ready.go:40] duration metric: took 12.315867881s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 21:08:41.768964   24285 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1013 21:08:41.771966   24285 out.go:179] * Done! kubectl is now configured to use "functional-192425" cluster and "default" namespace by default
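	(The restart finishes by waiting for each control-plane pod to report Ready. A sketch of an equivalent manual wait, assuming the kubeconfig context name matches the profile:

	kubectl --context functional-192425 -n kube-system wait --timeout=4m \
	  --for=condition=Ready pod -l k8s-app=kube-dns

	The sections below are diagnostics captured from the same node.)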
	
	
	==> CRI-O <==
	Oct 13 21:09:17 functional-192425 crio[3486]: time="2025-10-13T21:09:17.119031692Z" level=info msg="Got pod network &{Name:hello-node-75c85bcc94-nsptj Namespace:default ID:cff5c3e3d6cb770ed085ad0653f58ddd60c0a3c54f1e22345fa613af15b64f91 UID:96554d0f-4592-4927-9251-8307d634b280 NetNS:/var/run/netns/e09d0d8f-a9bf-4d9e-a3de-72ecbd125684 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40007dfce0}] Aliases:map[]}"
	Oct 13 21:09:17 functional-192425 crio[3486]: time="2025-10-13T21:09:17.119187585Z" level=info msg="Checking pod default_hello-node-75c85bcc94-nsptj for CNI network kindnet (type=ptp)"
	Oct 13 21:09:17 functional-192425 crio[3486]: time="2025-10-13T21:09:17.122792955Z" level=info msg="Ran pod sandbox cff5c3e3d6cb770ed085ad0653f58ddd60c0a3c54f1e22345fa613af15b64f91 with infra container: default/hello-node-75c85bcc94-nsptj/POD" id=3c2f0e75-b6db-4896-b1e5-f1d1866e79ee name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 13 21:09:17 functional-192425 crio[3486]: time="2025-10-13T21:09:17.126531918Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=ab469c71-2425-450d-930e-df14cf6fc72c name=/runtime.v1.ImageService/PullImage
	Oct 13 21:09:22 functional-192425 crio[3486]: time="2025-10-13T21:09:22.17885355Z" level=info msg="Stopping pod sandbox: 25c51040a6b7bd5e42e728fb6c6de6f690ef5e4ae083d8a083eee35f142f1c53" id=e5b246bd-7233-4e20-be75-c9b1df076a6e name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 13 21:09:22 functional-192425 crio[3486]: time="2025-10-13T21:09:22.178939612Z" level=info msg="Stopped pod sandbox (already stopped): 25c51040a6b7bd5e42e728fb6c6de6f690ef5e4ae083d8a083eee35f142f1c53" id=e5b246bd-7233-4e20-be75-c9b1df076a6e name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 13 21:09:22 functional-192425 crio[3486]: time="2025-10-13T21:09:22.179828616Z" level=info msg="Removing pod sandbox: 25c51040a6b7bd5e42e728fb6c6de6f690ef5e4ae083d8a083eee35f142f1c53" id=b43ffee2-8097-4575-8c47-64eea555937e name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 13 21:09:22 functional-192425 crio[3486]: time="2025-10-13T21:09:22.184067424Z" level=info msg="Removed pod sandbox: 25c51040a6b7bd5e42e728fb6c6de6f690ef5e4ae083d8a083eee35f142f1c53" id=b43ffee2-8097-4575-8c47-64eea555937e name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 13 21:09:22 functional-192425 crio[3486]: time="2025-10-13T21:09:22.184654027Z" level=info msg="Stopping pod sandbox: f8ad8e54c83d3b8959d7e2a3584be7795679d3673e58de13078b788efe2d9bfe" id=c2ccdf58-b5d7-41c8-96ea-c7f6b6ff3e2c name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 13 21:09:22 functional-192425 crio[3486]: time="2025-10-13T21:09:22.184693411Z" level=info msg="Stopped pod sandbox (already stopped): f8ad8e54c83d3b8959d7e2a3584be7795679d3673e58de13078b788efe2d9bfe" id=c2ccdf58-b5d7-41c8-96ea-c7f6b6ff3e2c name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 13 21:09:22 functional-192425 crio[3486]: time="2025-10-13T21:09:22.185173859Z" level=info msg="Removing pod sandbox: f8ad8e54c83d3b8959d7e2a3584be7795679d3673e58de13078b788efe2d9bfe" id=1da8ad45-6228-43df-b11c-0b45c6c471ba name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 13 21:09:22 functional-192425 crio[3486]: time="2025-10-13T21:09:22.188839223Z" level=info msg="Removed pod sandbox: f8ad8e54c83d3b8959d7e2a3584be7795679d3673e58de13078b788efe2d9bfe" id=1da8ad45-6228-43df-b11c-0b45c6c471ba name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 13 21:09:22 functional-192425 crio[3486]: time="2025-10-13T21:09:22.192195508Z" level=info msg="Stopping pod sandbox: 9334e84ed8be75c2d5fdc9b7fa1e18bf6f0c228799144cd14a165b77c339dd96" id=f3759494-f84b-4296-ae02-0ae5217df510 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 13 21:09:22 functional-192425 crio[3486]: time="2025-10-13T21:09:22.19226324Z" level=info msg="Stopped pod sandbox (already stopped): 9334e84ed8be75c2d5fdc9b7fa1e18bf6f0c228799144cd14a165b77c339dd96" id=f3759494-f84b-4296-ae02-0ae5217df510 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 13 21:09:22 functional-192425 crio[3486]: time="2025-10-13T21:09:22.193833204Z" level=info msg="Removing pod sandbox: 9334e84ed8be75c2d5fdc9b7fa1e18bf6f0c228799144cd14a165b77c339dd96" id=bf5432e8-2d3d-48f8-b4d3-366bb5f56bf9 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 13 21:09:22 functional-192425 crio[3486]: time="2025-10-13T21:09:22.197463222Z" level=info msg="Removed pod sandbox: 9334e84ed8be75c2d5fdc9b7fa1e18bf6f0c228799144cd14a165b77c339dd96" id=bf5432e8-2d3d-48f8-b4d3-366bb5f56bf9 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 13 21:09:33 functional-192425 crio[3486]: time="2025-10-13T21:09:33.008259256Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=bc6f698c-5d86-4ab2-a8e1-9f3bbdab483c name=/runtime.v1.ImageService/PullImage
	Oct 13 21:09:41 functional-192425 crio[3486]: time="2025-10-13T21:09:41.008393033Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=21af29cd-aa9b-49b6-9b8f-9f9ed8598c8f name=/runtime.v1.ImageService/PullImage
	Oct 13 21:10:01 functional-192425 crio[3486]: time="2025-10-13T21:10:01.007338503Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=2a558371-b4e2-48df-a5af-0a8c6ec7849e name=/runtime.v1.ImageService/PullImage
	Oct 13 21:10:30 functional-192425 crio[3486]: time="2025-10-13T21:10:30.008412781Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=15cccaa8-7bd6-4d0c-a9ce-cfe9b4ab6597 name=/runtime.v1.ImageService/PullImage
	Oct 13 21:10:54 functional-192425 crio[3486]: time="2025-10-13T21:10:54.015351824Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=f839fac0-5055-49bb-9d5a-fbec1c29eab6 name=/runtime.v1.ImageService/PullImage
	Oct 13 21:11:59 functional-192425 crio[3486]: time="2025-10-13T21:11:59.008021519Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=b096cde8-b03e-4c5c-8fe4-073f8ef7d18b name=/runtime.v1.ImageService/PullImage
	Oct 13 21:12:21 functional-192425 crio[3486]: time="2025-10-13T21:12:21.007317523Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=7721e4a9-3534-4986-96f8-7e5c6f7b619c name=/runtime.v1.ImageService/PullImage
	Oct 13 21:14:45 functional-192425 crio[3486]: time="2025-10-13T21:14:45.007294631Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=6021ba4b-05ae-49aa-9fd3-780f8941adb9 name=/runtime.v1.ImageService/PullImage
	Oct 13 21:15:07 functional-192425 crio[3486]: time="2025-10-13T21:15:07.008998483Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=b31763ef-3b69-4c6b-98e0-45bfd79a08f2 name=/runtime.v1.ImageService/PullImage
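	(The CRI-O excerpt above shows "Pulling image: kicbase/echo-server:latest" repeating for several minutes with no pull completion in this excerpt, which is consistent with the hello-node pod never becoming ready. A hedged way to retry the pull by hand inside the node (for example via `minikube -p functional-192425 ssh`), not something the report ran:

	sudo crictl pull kicbase/echo-server:latest
	sudo crictl images | grep echo-server
	)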
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                             CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	b41577a0b202d       docker.io/library/nginx@sha256:ac03974aaaeb5e3fbe2ab74d7f2badf1388596f6877cbacf78af3617addbba9a   9 minutes ago       Running             myfrontend                0                   7230f132026f3       sp-pod                                      default
	ea190900ac572       docker.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0   10 minutes ago      Running             nginx                     0                   b0e9b091d4215       nginx-svc                                   default
	7f5fab4c4c984       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  10 minutes ago      Running             storage-provisioner       2                   0c2bc1291150a       storage-provisioner                         kube-system
	35d9105c2f19e       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                  10 minutes ago      Running             kube-proxy                2                   d76c0641b9a9d       kube-proxy-p24r2                            kube-system
	241128569a209       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  10 minutes ago      Running             kindnet-cni               2                   294063873e301       kindnet-vjh4c                               kube-system
	bbe01b14f0176       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  10 minutes ago      Running             coredns                   2                   77384ba975a7c       coredns-66bc5c9577-nrb79                    kube-system
	ff4416f13e22b       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                  10 minutes ago      Running             kube-apiserver            0                   2da70dd25f471       kube-apiserver-functional-192425            kube-system
	29a8feec41d66       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                  10 minutes ago      Running             kube-controller-manager   2                   3c80b7009d31a       kube-controller-manager-functional-192425   kube-system
	fab0520dbbcc4       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                  10 minutes ago      Running             kube-scheduler            2                   5c6c0e386b145       kube-scheduler-functional-192425            kube-system
	9981e560e9cc1       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                  10 minutes ago      Running             etcd                      2                   cef10f8650dce       etcd-functional-192425                      kube-system
	056c812596a0c       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                  11 minutes ago      Exited              kube-scheduler            1                   5c6c0e386b145       kube-scheduler-functional-192425            kube-system
	b3e91ed9fdd87       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  11 minutes ago      Exited              storage-provisioner       1                   0c2bc1291150a       storage-provisioner                         kube-system
	d95bf91e9ff35       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  11 minutes ago      Exited              coredns                   1                   77384ba975a7c       coredns-66bc5c9577-nrb79                    kube-system
	462360293fc54       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  11 minutes ago      Exited              kindnet-cni               1                   294063873e301       kindnet-vjh4c                               kube-system
	11f00bec53270       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                  11 minutes ago      Exited              kube-proxy                1                   d76c0641b9a9d       kube-proxy-p24r2                            kube-system
	83c7c18d03c39       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                  11 minutes ago      Exited              kube-controller-manager   1                   3c80b7009d31a       kube-controller-manager-functional-192425   kube-system
	02e492ce60107       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                  11 minutes ago      Exited              etcd                      1                   cef10f8650dce       etcd-functional-192425                      kube-system
	
	
	==> coredns [bbe01b14f017607c8821de4a1af4b31ed942f9e2e49a777c2b53de74d7ecf04d] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35761 - 5218 "HINFO IN 7814145264704041871.5893921314458530455. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.034411131s
	
	
	==> coredns [d95bf91e9ff358f8949636821ef90f8d792d7188cad30b53f4c544ad0f06dfc5] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55588 - 58053 "HINFO IN 3752971454864559002.2415907993512092873. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021871328s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-192425
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-192425
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6d66ff63385795e7745a92b3d96cb54f5b977801
	                    minikube.k8s.io/name=functional-192425
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_13T21_06_38_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 Oct 2025 21:06:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-192425
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 Oct 2025 21:18:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 Oct 2025 21:18:37 +0000   Mon, 13 Oct 2025 21:06:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 Oct 2025 21:18:37 +0000   Mon, 13 Oct 2025 21:06:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 Oct 2025 21:18:37 +0000   Mon, 13 Oct 2025 21:06:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 13 Oct 2025 21:18:37 +0000   Mon, 13 Oct 2025 21:07:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-192425
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 fc6883c36cdd4be2a3f98e521269c570
	  System UUID:                29958ce2-6fc8-48ee-8f69-875a0057eb6c
	  Boot ID:                    d306204e-e74c-4697-9887-c9f3c96cc083
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-nsptj                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m46s
	  default                     hello-node-connect-7d85dfc575-tth7p          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m52s
	  kube-system                 coredns-66bc5c9577-nrb79                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-functional-192425                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-vjh4c                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-functional-192425             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-192425    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-p24r2                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-192425             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node functional-192425 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node functional-192425 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m                kubelet          Node functional-192425 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                node-controller  Node functional-192425 event: Registered Node functional-192425 in Controller
	  Normal   NodeReady                11m                kubelet          Node functional-192425 status is now: NodeReady
	  Normal   RegisteredNode           11m                node-controller  Node functional-192425 event: Registered Node functional-192425 in Controller
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-192425 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-192425 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-192425 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node functional-192425 event: Registered Node functional-192425 in Controller
	
	
	==> dmesg <==
	[Oct13 20:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015096] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.497062] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032757] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.728511] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.553238] kauditd_printk_skb: 36 callbacks suppressed
	[Oct13 20:59] overlayfs: idmapped layers are currently not supported
	[  +0.065201] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Oct13 21:05] overlayfs: idmapped layers are currently not supported
	[Oct13 21:06] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [02e492ce60107116d8980e3f4a4c64d51b4804700c1dd0c4f222e8666f3a36bc] <==
	{"level":"warn","ts":"2025-10-13T21:07:39.491647Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:07:39.508989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:07:39.537110Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:07:39.604800Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:07:39.607261Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:07:39.615357Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:07:39.722955Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33532","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-13T21:08:06.721569Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-13T21:08:06.721624Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-192425","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-10-13T21:08:06.721723Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-13T21:08:06.860469Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-13T21:08:06.861864Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-13T21:08:06.861924Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"warn","ts":"2025-10-13T21:08:06.861908Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-13T21:08:06.862028Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-13T21:08:06.862072Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-13T21:08:06.862091Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-13T21:08:06.862104Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"error","ts":"2025-10-13T21:08:06.862072Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-13T21:08:06.862016Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-13T21:08:06.861993Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-10-13T21:08:06.866125Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-10-13T21:08:06.866217Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-13T21:08:06.866261Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-13T21:08:06.866268Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-192425","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [9981e560e9cc13a256fb44fc6351b151a1b9a5385f82929bdcb7deb4536dfcd1] <==
	{"level":"warn","ts":"2025-10-13T21:08:25.203097Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:08:25.226905Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:08:25.237709Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:08:25.253730Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:08:25.270032Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:08:25.282797Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:08:25.297817Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:08:25.312446Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:08:25.326217Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:08:25.342438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:08:25.356513Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:08:25.371316Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:08:25.387050Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:08:25.402312Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:08:25.419075Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:08:25.433501Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:08:25.452816Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48032","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:08:25.463872Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:08:25.495215Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:08:25.508386Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:08:25.523837Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:08:25.588295Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48120","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-13T21:18:24.084733Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1090}
	{"level":"info","ts":"2025-10-13T21:18:24.108903Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1090,"took":"23.881352ms","hash":358855389,"current-db-size-bytes":3166208,"current-db-size":"3.2 MB","current-db-size-in-use-bytes":1355776,"current-db-size-in-use":"1.4 MB"}
	{"level":"info","ts":"2025-10-13T21:18:24.108958Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":358855389,"revision":1090,"compact-revision":-1}
	
	
	==> kernel <==
	 21:19:02 up  1:01,  0 user,  load average: 0.07, 0.30, 0.49
	Linux functional-192425 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [241128569a209beece92353a45440bed4d3e8b639dbe342e58fb9f4ee1dc0b26] <==
	I1013 21:16:57.719895       1 main.go:301] handling current node
	I1013 21:17:07.714351       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 21:17:07.714383       1 main.go:301] handling current node
	I1013 21:17:17.712089       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 21:17:17.712185       1 main.go:301] handling current node
	I1013 21:17:27.713133       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 21:17:27.713241       1 main.go:301] handling current node
	I1013 21:17:37.712981       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 21:17:37.713015       1 main.go:301] handling current node
	I1013 21:17:47.712858       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 21:17:47.712891       1 main.go:301] handling current node
	I1013 21:17:57.712536       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 21:17:57.712670       1 main.go:301] handling current node
	I1013 21:18:07.713001       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 21:18:07.713038       1 main.go:301] handling current node
	I1013 21:18:17.713834       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 21:18:17.713894       1 main.go:301] handling current node
	I1013 21:18:27.713269       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 21:18:27.713299       1 main.go:301] handling current node
	I1013 21:18:37.712263       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 21:18:37.712359       1 main.go:301] handling current node
	I1013 21:18:47.712120       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 21:18:47.712152       1 main.go:301] handling current node
	I1013 21:18:57.712639       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 21:18:57.712672       1 main.go:301] handling current node
	
	
	==> kindnet [462360293fc54998a4a64128a59da7b62f7c636443876e4faca21edcb302a3bc] <==
	I1013 21:07:37.506552       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1013 21:07:37.506922       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1013 21:07:37.507076       1 main.go:148] setting mtu 1500 for CNI 
	I1013 21:07:37.507114       1 main.go:178] kindnetd IP family: "ipv4"
	I1013 21:07:37.507149       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-13T21:07:37Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1013 21:07:37.705018       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1013 21:07:37.705091       1 controller.go:381] "Waiting for informer caches to sync"
	I1013 21:07:37.705123       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1013 21:07:37.706260       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1013 21:07:41.206257       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1013 21:07:41.206316       1 metrics.go:72] Registering metrics
	I1013 21:07:41.206420       1 controller.go:711] "Syncing nftables rules"
	I1013 21:07:47.705320       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 21:07:47.705357       1 main.go:301] handling current node
	I1013 21:07:57.705195       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1013 21:07:57.705297       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ff4416f13e22ba51f86faa829baa3a23b8230e3a1e1342e64e12afabfd64bd33] <==
	I1013 21:08:26.409371       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1013 21:08:26.410050       1 aggregator.go:171] initial CRD sync complete...
	I1013 21:08:26.410072       1 autoregister_controller.go:144] Starting autoregister controller
	I1013 21:08:26.410078       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1013 21:08:26.410084       1 cache.go:39] Caches are synced for autoregister controller
	I1013 21:08:26.409810       1 cache.go:39] Caches are synced for RemoteAvailability controller
	E1013 21:08:26.414616       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1013 21:08:26.415300       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1013 21:08:26.426803       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1013 21:08:27.000459       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1013 21:08:27.121343       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1013 21:08:28.067921       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1013 21:08:28.181315       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1013 21:08:28.319104       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1013 21:08:28.327236       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1013 21:08:29.614359       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1013 21:08:29.863775       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1013 21:08:30.033181       1 controller.go:667] quota admission added evaluator for: endpoints
	I1013 21:08:45.217117       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.101.50.12"}
	I1013 21:08:51.674899       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.104.154.56"}
	I1013 21:09:00.660235       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.101.63.87"}
	E1013 21:09:09.188325       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:55570: use of closed network connection
	E1013 21:09:16.642587       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:55612: use of closed network connection
	I1013 21:09:16.872660       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.109.36.153"}
	I1013 21:18:26.318473       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [29a8feec41d667954438f4749d12f7d21280843641eb07a54677e1002d990d9f] <==
	I1013 21:08:29.608174       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1013 21:08:29.609578       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1013 21:08:29.611410       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1013 21:08:29.611705       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1013 21:08:29.613594       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1013 21:08:29.613707       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1013 21:08:29.615947       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1013 21:08:29.619140       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1013 21:08:29.619226       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1013 21:08:29.622832       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1013 21:08:29.626661       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1013 21:08:29.629904       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1013 21:08:29.634458       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1013 21:08:29.637747       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1013 21:08:29.640051       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 21:08:29.641088       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1013 21:08:29.652528       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1013 21:08:29.654692       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1013 21:08:29.655845       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1013 21:08:29.655899       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1013 21:08:29.655999       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1013 21:08:29.662345       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1013 21:08:29.666625       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 21:08:29.666645       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1013 21:08:29.666653       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [83c7c18d03c3960ab3ca99d2633249db6dfc9188c92983f7d7f9aa3da8469229] <==
	I1013 21:07:44.113266       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1013 21:07:44.113375       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1013 21:07:44.119154       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1013 21:07:44.142386       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1013 21:07:44.151535       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1013 21:07:44.153793       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1013 21:07:44.153844       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1013 21:07:44.153873       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1013 21:07:44.153878       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 21:07:44.153934       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1013 21:07:44.153977       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1013 21:07:44.153938       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1013 21:07:44.154253       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1013 21:07:44.154587       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1013 21:07:44.154872       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1013 21:07:44.155072       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1013 21:07:44.155167       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1013 21:07:44.155349       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1013 21:07:44.155664       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1013 21:07:44.156555       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1013 21:07:44.161194       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1013 21:07:44.165088       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1013 21:07:44.165444       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1013 21:07:44.167856       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1013 21:07:44.175843       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [11f00bec5327064bf506f1e3cea4b1d276be239f4983128d63543fbe29c372c4] <==
	I1013 21:07:37.482398       1 server_linux.go:53] "Using iptables proxy"
	I1013 21:07:38.706515       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 21:07:41.221347       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 21:07:41.221387       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1013 21:07:41.221470       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 21:07:41.393064       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1013 21:07:41.393120       1 server_linux.go:132] "Using iptables Proxier"
	I1013 21:07:41.423073       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 21:07:41.423417       1 server.go:527] "Version info" version="v1.34.1"
	I1013 21:07:41.423435       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 21:07:41.424869       1 config.go:200] "Starting service config controller"
	I1013 21:07:41.424879       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 21:07:41.424906       1 config.go:106] "Starting endpoint slice config controller"
	I1013 21:07:41.424910       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 21:07:41.424921       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 21:07:41.424925       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 21:07:41.425629       1 config.go:309] "Starting node config controller"
	I1013 21:07:41.425637       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 21:07:41.425649       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 21:07:41.525286       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1013 21:07:41.525317       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1013 21:07:41.525353       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [35d9105c2f19e73040d458a4909f4950faadc4f39d82a567d9b4c2c70a57efa2] <==
	I1013 21:08:27.518022       1 server_linux.go:53] "Using iptables proxy"
	I1013 21:08:27.630012       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 21:08:27.731857       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 21:08:27.737737       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1013 21:08:27.737858       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 21:08:27.759724       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1013 21:08:27.759884       1 server_linux.go:132] "Using iptables Proxier"
	I1013 21:08:27.763919       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 21:08:27.764314       1 server.go:527] "Version info" version="v1.34.1"
	I1013 21:08:27.764494       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 21:08:27.765733       1 config.go:200] "Starting service config controller"
	I1013 21:08:27.765786       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 21:08:27.765825       1 config.go:106] "Starting endpoint slice config controller"
	I1013 21:08:27.765851       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 21:08:27.765883       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 21:08:27.765913       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 21:08:27.766571       1 config.go:309] "Starting node config controller"
	I1013 21:08:27.766618       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 21:08:27.766646       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 21:08:27.866507       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1013 21:08:27.866545       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1013 21:08:27.866590       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [056c812596a0c20e32dbf6ac5b9a66ee7b3d6e283364d0e40c581cf11c53cc1e] <==
	I1013 21:07:39.534485       1 serving.go:386] Generated self-signed cert in-memory
	I1013 21:07:41.447238       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1013 21:07:41.447277       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 21:07:41.456564       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1013 21:07:41.456676       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1013 21:07:41.456694       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1013 21:07:41.456837       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1013 21:07:41.464631       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 21:07:41.464823       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 21:07:41.464878       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1013 21:07:41.465010       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1013 21:07:41.557730       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1013 21:07:41.565601       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 21:07:41.568870       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1013 21:08:06.716574       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1013 21:08:06.716594       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1013 21:08:06.716613       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1013 21:08:06.716639       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1013 21:08:06.716725       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 21:08:06.716738       1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
	I1013 21:08:06.717015       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1013 21:08:06.717124       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [fab0520dbbcc47805c22543ccb9dc63b70e166cf00c2ebcda1552d25435484fa] <==
	I1013 21:08:23.765596       1 serving.go:386] Generated self-signed cert in-memory
	W1013 21:08:26.252138       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1013 21:08:26.252245       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1013 21:08:26.252282       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1013 21:08:26.252325       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1013 21:08:26.333930       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1013 21:08:26.336789       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 21:08:26.339534       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1013 21:08:26.340125       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1013 21:08:26.340204       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 21:08:26.358384       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 21:08:26.459770       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 13 21:16:27 functional-192425 kubelet[3801]: E1013 21:16:27.006922    3801 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-tth7p" podUID="46c69a2c-1831-4bb9-83d5-86e2d7f18b2d"
	Oct 13 21:16:31 functional-192425 kubelet[3801]: E1013 21:16:31.006914    3801 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-nsptj" podUID="96554d0f-4592-4927-9251-8307d634b280"
	Oct 13 21:16:38 functional-192425 kubelet[3801]: E1013 21:16:38.010665    3801 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-tth7p" podUID="46c69a2c-1831-4bb9-83d5-86e2d7f18b2d"
	Oct 13 21:16:45 functional-192425 kubelet[3801]: E1013 21:16:45.009758    3801 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-nsptj" podUID="96554d0f-4592-4927-9251-8307d634b280"
	Oct 13 21:16:52 functional-192425 kubelet[3801]: E1013 21:16:52.008855    3801 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-tth7p" podUID="46c69a2c-1831-4bb9-83d5-86e2d7f18b2d"
	Oct 13 21:16:56 functional-192425 kubelet[3801]: E1013 21:16:56.013966    3801 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-nsptj" podUID="96554d0f-4592-4927-9251-8307d634b280"
	Oct 13 21:17:03 functional-192425 kubelet[3801]: E1013 21:17:03.011632    3801 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-tth7p" podUID="46c69a2c-1831-4bb9-83d5-86e2d7f18b2d"
	Oct 13 21:17:09 functional-192425 kubelet[3801]: E1013 21:17:09.006891    3801 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-nsptj" podUID="96554d0f-4592-4927-9251-8307d634b280"
	Oct 13 21:17:14 functional-192425 kubelet[3801]: E1013 21:17:14.007246    3801 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-tth7p" podUID="46c69a2c-1831-4bb9-83d5-86e2d7f18b2d"
	Oct 13 21:17:23 functional-192425 kubelet[3801]: E1013 21:17:23.007134    3801 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-nsptj" podUID="96554d0f-4592-4927-9251-8307d634b280"
	Oct 13 21:17:28 functional-192425 kubelet[3801]: E1013 21:17:28.008503    3801 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-tth7p" podUID="46c69a2c-1831-4bb9-83d5-86e2d7f18b2d"
	Oct 13 21:17:36 functional-192425 kubelet[3801]: E1013 21:17:36.006729    3801 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-nsptj" podUID="96554d0f-4592-4927-9251-8307d634b280"
	Oct 13 21:17:40 functional-192425 kubelet[3801]: E1013 21:17:40.009815    3801 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-tth7p" podUID="46c69a2c-1831-4bb9-83d5-86e2d7f18b2d"
	Oct 13 21:17:47 functional-192425 kubelet[3801]: E1013 21:17:47.007390    3801 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-nsptj" podUID="96554d0f-4592-4927-9251-8307d634b280"
	Oct 13 21:17:51 functional-192425 kubelet[3801]: E1013 21:17:51.007369    3801 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-tth7p" podUID="46c69a2c-1831-4bb9-83d5-86e2d7f18b2d"
	Oct 13 21:18:01 functional-192425 kubelet[3801]: E1013 21:18:01.006709    3801 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-nsptj" podUID="96554d0f-4592-4927-9251-8307d634b280"
	Oct 13 21:18:03 functional-192425 kubelet[3801]: E1013 21:18:03.007243    3801 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-tth7p" podUID="46c69a2c-1831-4bb9-83d5-86e2d7f18b2d"
	Oct 13 21:18:14 functional-192425 kubelet[3801]: E1013 21:18:14.007394    3801 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-tth7p" podUID="46c69a2c-1831-4bb9-83d5-86e2d7f18b2d"
	Oct 13 21:18:14 functional-192425 kubelet[3801]: E1013 21:18:14.008122    3801 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-nsptj" podUID="96554d0f-4592-4927-9251-8307d634b280"
	Oct 13 21:18:28 functional-192425 kubelet[3801]: E1013 21:18:28.007762    3801 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-tth7p" podUID="46c69a2c-1831-4bb9-83d5-86e2d7f18b2d"
	Oct 13 21:18:29 functional-192425 kubelet[3801]: E1013 21:18:29.006741    3801 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-nsptj" podUID="96554d0f-4592-4927-9251-8307d634b280"
	Oct 13 21:18:40 functional-192425 kubelet[3801]: E1013 21:18:40.007336    3801 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-tth7p" podUID="46c69a2c-1831-4bb9-83d5-86e2d7f18b2d"
	Oct 13 21:18:40 functional-192425 kubelet[3801]: E1013 21:18:40.008067    3801 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-nsptj" podUID="96554d0f-4592-4927-9251-8307d634b280"
	Oct 13 21:18:51 functional-192425 kubelet[3801]: E1013 21:18:51.007358    3801 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-nsptj" podUID="96554d0f-4592-4927-9251-8307d634b280"
	Oct 13 21:18:53 functional-192425 kubelet[3801]: E1013 21:18:53.007270    3801 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-tth7p" podUID="46c69a2c-1831-4bb9-83d5-86e2d7f18b2d"
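The repeated ImagePullBackOff entries above all trace back to CRI-O's enforcing short-name mode rejecting the unqualified reference `kicbase/echo-server`. A minimal way to check that hypothesis on this node, assuming the functional-192425 profile from this run is still available, is to retry the pull over the cluster's runtime with a fully qualified name:

	out/minikube-linux-arm64 -p functional-192425 ssh -- sudo crictl pull docker.io/kicbase/echo-server:latest

If the qualified pull succeeds, the failure is short-name resolution rather than image availability.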
	
	
	==> storage-provisioner [7f5fab4c4c984b04b22c00e9ad277a23231c5277f8d96ab508f9c03f588e19df] <==
	W1013 21:18:37.542776       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:18:39.545321       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:18:39.549538       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:18:41.553333       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:18:41.559684       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:18:43.563006       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:18:43.567327       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:18:45.570049       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:18:45.574464       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:18:47.578893       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:18:47.582796       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:18:49.585316       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:18:49.589374       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:18:51.592284       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:18:51.596465       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:18:53.599338       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:18:53.603759       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:18:55.606374       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:18:55.612921       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:18:57.615775       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:18:57.619999       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:18:59.623448       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:18:59.630128       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:19:01.633778       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:19:01.641406       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [b3e91ed9fdd87dd5e579a56f4f26eb9671b9f8c976c3d639d68eb9063bae1e18] <==
	I1013 21:07:37.468961       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1013 21:07:41.187195       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1013 21:07:41.187330       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1013 21:07:41.202580       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:07:44.675691       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:07:48.936075       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:07:52.534448       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:07:55.587332       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:07:58.609265       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:07:58.614514       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1013 21:07:58.614657       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1013 21:07:58.614820       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-192425_21475037-328c-46bc-b754-373ae7766029!
	I1013 21:07:58.614871       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"450a6140-8392-444a-8dc7-8f380815b474", APIVersion:"v1", ResourceVersion:"519", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-192425_21475037-328c-46bc-b754-373ae7766029 became leader
	W1013 21:07:58.622604       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:07:58.628077       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1013 21:07:58.715579       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-192425_21475037-328c-46bc-b754-373ae7766029!
	W1013 21:08:00.631601       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:08:00.639120       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:08:02.643398       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:08:02.650943       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:08:04.653886       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:08:04.659097       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:08:06.661748       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 21:08:06.666338       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-192425 -n functional-192425
helpers_test.go:269: (dbg) Run:  kubectl --context functional-192425 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-node-75c85bcc94-nsptj hello-node-connect-7d85dfc575-tth7p
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-192425 describe pod hello-node-75c85bcc94-nsptj hello-node-connect-7d85dfc575-tth7p
helpers_test.go:290: (dbg) kubectl --context functional-192425 describe pod hello-node-75c85bcc94-nsptj hello-node-connect-7d85dfc575-tth7p:

                                                
                                                
-- stdout --
	Name:             hello-node-75c85bcc94-nsptj
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-192425/192.168.49.2
	Start Time:       Mon, 13 Oct 2025 21:09:16 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xj6v9 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-xj6v9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m47s                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-nsptj to functional-192425
	  Normal   Pulling    6m42s (x5 over 9m46s)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m42s (x5 over 9m46s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     6m42s (x5 over 9m46s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4m36s (x21 over 9m46s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m36s (x21 over 9m46s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-tth7p
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-192425/192.168.49.2
	Start Time:       Mon, 13 Oct 2025 21:09:00 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7gqqk (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-7gqqk:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-tth7p to functional-192425
	  Normal   Pulling    7m4s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m4s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m4s (x5 over 10m)    kubelet            Error: ErrImagePull
	  Normal   BackOff    4m57s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m57s (x21 over 10m)  kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.49s)
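
A note on the root cause: every ErrImagePull in the logs above carries the same message, "short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list". With CRI-O's image library in enforcing short-name mode, an unqualified image name that matches more than one configured search registry is rejected instead of being resolved. A minimal node-side workaround sketch, assuming the node reads containers-registries drop-ins from /etc/containers/registries.conf.d (the file name below is illustrative, not taken from this run):

	# run inside the node, e.g. via `minikube -p functional-192425 ssh`
	# map the short name to one fully-qualified reference so enforcing mode
	# no longer sees an ambiguous candidate list (docker.io target is an assumption)
	sudo tee /etc/containers/registries.conf.d/99-echo-server.conf <<'EOF'
	[aliases]
	"kicbase/echo-server" = "docker.io/kicbase/echo-server"
	EOF
	sudo systemctl restart crio   # let CRI-O pick up the new alias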

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (600.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-192425 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-192425 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-nsptj" [96554d0f-4592-4927-9251-8307d634b280] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E1013 21:09:34.687115    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:11:50.818379    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:12:18.528952    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:16:50.818320    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-192425 -n functional-192425
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-10-13 21:19:17.388165452 +0000 UTC m=+1282.222994630
functional_test.go:1460: (dbg) Run:  kubectl --context functional-192425 describe po hello-node-75c85bcc94-nsptj -n default
functional_test.go:1460: (dbg) kubectl --context functional-192425 describe po hello-node-75c85bcc94-nsptj -n default:
Name:             hello-node-75c85bcc94-nsptj
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-192425/192.168.49.2
Start Time:       Mon, 13 Oct 2025 21:09:16 +0000
Labels:           app=hello-node
                  pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
  IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xj6v9 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-xj6v9:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-nsptj to functional-192425
  Normal   Pulling    6m56s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     6m56s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     6m56s (x5 over 10m)   kubelet            Error: ErrImagePull
  Normal   BackOff    4m50s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed     4m50s (x21 over 10m)  kubelet            Error: ImagePullBackOff
functional_test.go:1460: (dbg) Run:  kubectl --context functional-192425 logs hello-node-75c85bcc94-nsptj -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-192425 logs hello-node-75c85bcc94-nsptj -n default: exit status 1 (143.005536ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-nsptj" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-192425 logs hello-node-75c85bcc94-nsptj -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.99s)
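
The deployment created at functional_test.go:1451 uses the same unqualified image name, so it runs into the short-name restriction described above. A hedged variant of that command with a fully-qualified reference (the docker.io prefix is an assumption, not something this log confirms):

	kubectl --context functional-192425 create deployment hello-node \
	  --image docker.io/kicbase/echo-server
	kubectl --context functional-192425 expose deployment hello-node --type=NodePort --port=8080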

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-192425 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-192425 service --namespace=default --https --url hello-node: exit status 115 (608.126976ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:32566
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-192425 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.61s)
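
This failure, like the Format and URL subtests below, is downstream of the ImagePullBackOff above: the NodePort URL is printed, but minikube then bails out because the service has no running backend. A quick way to confirm the empty backend set from the same context (sketch; the service and namespace are the ones used by the test):

	# ENDPOINTS stays empty while the echo-server pods are stuck in ImagePullBackOff
	kubectl --context functional-192425 get endpoints hello-node -n default
	kubectl --context functional-192425 get pods -l app=hello-node -n default -o wide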

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-192425 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-192425 service hello-node --url --format={{.IP}}: exit status 115 (535.902387ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-192425 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.54s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-192425 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-192425 service hello-node --url: exit status 115 (510.66828ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:32566
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-192425 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:32566
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.51s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-192425 image load --daemon kicbase/echo-server:functional-192425 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-192425 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-192425" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.29s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-192425 image load --daemon kicbase/echo-server:functional-192425 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-192425 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-192425" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.19s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-192425
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-192425 image load --daemon kicbase/echo-server:functional-192425 --alsologtostderr
2025/10/13 21:19:28 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-192425 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-192425" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.31s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-192425 image save kicbase/echo-server:functional-192425 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.39s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-192425 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1013 21:19:30.677360   32469 out.go:360] Setting OutFile to fd 1 ...
	I1013 21:19:30.684293   32469 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:19:30.684381   32469 out.go:374] Setting ErrFile to fd 2...
	I1013 21:19:30.684404   32469 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:19:30.684963   32469 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-2495/.minikube/bin
	I1013 21:19:30.686792   32469 config.go:182] Loaded profile config "functional-192425": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:19:30.687454   32469 config.go:182] Loaded profile config "functional-192425": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:19:30.689551   32469 cli_runner.go:164] Run: docker container inspect functional-192425 --format={{.State.Status}}
	I1013 21:19:30.743460   32469 ssh_runner.go:195] Run: systemctl --version
	I1013 21:19:30.743546   32469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-192425
	I1013 21:19:30.762362   32469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/functional-192425/id_rsa Username:docker}
	I1013 21:19:30.866072   32469 cache_images.go:290] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar
	W1013 21:19:30.866137   32469 cache_images.go:254] Failed to load cached images for "functional-192425": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar: no such file or directory
	I1013 21:19:30.866161   32469 cache_images.go:266] failed pushing to: functional-192425

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.29s)
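
The stat error in the stderr above is a knock-on effect of ImageSaveToFile: `image save` exited without writing the tarball, so this test has nothing to load. The intended round trip, reproduced manually (sketch; /tmp/echo-server.tar is an illustrative path, not the one the suite uses):

	minikube -p functional-192425 image save kicbase/echo-server:functional-192425 /tmp/echo-server.tar
	ls -l /tmp/echo-server.tar    # the suite asserts the saved tarball exists before loading it
	minikube -p functional-192425 image load /tmp/echo-server.tar
	minikube -p functional-192425 image ls | grep echo-server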

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-192425
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-192425 image save --daemon kicbase/echo-server:functional-192425 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-192425
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-192425: exit status 1 (20.885288ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-192425

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-192425

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.45s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (1.76s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-555478 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p json-output-555478 --output=json --user=testUser: exit status 80 (1.754969008s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"d2e252ae-07f9-409d-993c-67b5aec2b4ab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-555478 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"83ff7797-1a1e-41f6-89f9-618e12ce69ce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-13T21:32:22Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"c8890ff1-8c46-4f87-be53-552113e61168","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 pause -p json-output-555478 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (1.76s)
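
The pause and unpause failures here, and the TestPause/serial/Pause failure later in this section, all trip over the same symptom: minikube lists containers on the node with `sudo runc list -f json`, and /run/runc does not exist there. A small diagnostic sketch from the host (profile name taken from this test; checking /run/crun is an assumption that CRI-O may be running containers through crun, whose state directory that is, rather than runc):

	# does the runc state directory exist, or is a crun state directory there instead?
	minikube -p json-output-555478 ssh -- ls -ld /run/runc /run/crun
	# CRI-O itself can still enumerate containers even when `runc list` fails
	minikube -p json-output-555478 ssh -- sudo crictl ps --quiet | head -n 3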

                                                
                                    
x
+
TestJSONOutput/unpause/Command (1.63s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-555478 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 unpause -p json-output-555478 --output=json --user=testUser: exit status 80 (1.626903928s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"bbc30153-9a83-40e7-bb39-ca73759cb514","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-555478 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"03b1510f-4457-42e6-9c5b-dd9734b34e7b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-13T21:32:24Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"6be7b429-5a9b-4ff1-ae0b-379f71dc506f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 unpause -p json-output-555478 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.63s)

                                                
                                    
x
+
TestPause/serial/Pause (6.15s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-609677 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p pause-609677 --alsologtostderr -v=5: exit status 80 (1.798994344s)

                                                
                                                
-- stdout --
	* Pausing node pause-609677 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1013 21:54:50.659944  166158 out.go:360] Setting OutFile to fd 1 ...
	I1013 21:54:50.660974  166158 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:54:50.660992  166158 out.go:374] Setting ErrFile to fd 2...
	I1013 21:54:50.660998  166158 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:54:50.661246  166158 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-2495/.minikube/bin
	I1013 21:54:50.661512  166158 out.go:368] Setting JSON to false
	I1013 21:54:50.661537  166158 mustload.go:65] Loading cluster: pause-609677
	I1013 21:54:50.661956  166158 config.go:182] Loaded profile config "pause-609677": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:54:50.662399  166158 cli_runner.go:164] Run: docker container inspect pause-609677 --format={{.State.Status}}
	I1013 21:54:50.678827  166158 host.go:66] Checking if "pause-609677" exists ...
	I1013 21:54:50.679127  166158 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 21:54:50.735731  166158 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-13 21:54:50.725871022 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 21:54:50.736502  166158 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-609677 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) want
virtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1013 21:54:50.739757  166158 out.go:179] * Pausing node pause-609677 ... 
	I1013 21:54:50.743390  166158 host.go:66] Checking if "pause-609677" exists ...
	I1013 21:54:50.743775  166158 ssh_runner.go:195] Run: systemctl --version
	I1013 21:54:50.743910  166158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-609677
	I1013 21:54:50.760750  166158 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33026 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/pause-609677/id_rsa Username:docker}
	I1013 21:54:50.862393  166158 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 21:54:50.875202  166158 pause.go:52] kubelet running: true
	I1013 21:54:50.875283  166158 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1013 21:54:51.087061  166158 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1013 21:54:51.087218  166158 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1013 21:54:51.160881  166158 cri.go:89] found id: "e813b1e3d651851055f0daeb10197c02a797ba29bdd3f3c236cb1f479151d3c7"
	I1013 21:54:51.160904  166158 cri.go:89] found id: "23af4780ca7ffb5793e522bd1ba38bec36ec20fb0c2870aa8f742802612b1675"
	I1013 21:54:51.160910  166158 cri.go:89] found id: "e66556a02af29b5639690e5cd3315bd331955efbe9dba130bc12a73fb77b4cb6"
	I1013 21:54:51.160914  166158 cri.go:89] found id: "37fc253c25c3b33e95033f66d59c43bec2cc24937ab8865d0b893a1ca44bd78b"
	I1013 21:54:51.160918  166158 cri.go:89] found id: "1dd85a64ce20b9dcdee25927856a7136cc2ca59b9128254415532367203a522a"
	I1013 21:54:51.160921  166158 cri.go:89] found id: "fc8e2ea74687b306f0b75c55f4977da5505971a5a74789317b3fab98a5e92f03"
	I1013 21:54:51.160925  166158 cri.go:89] found id: "b779d4fe4cba319c08ada9835653c4429eb4ab3cfcac3fd8b6f5055e1b826f3d"
	I1013 21:54:51.160928  166158 cri.go:89] found id: "20c5aff0e9704a0e9cf80f1bc3097b3adcc89c040fb42664c031162cc8af3eee"
	I1013 21:54:51.160932  166158 cri.go:89] found id: "e633956403c8d2bcdad3ed466b8fd307d41d45bfbb6d078f6e5f72f0c194d2ee"
	I1013 21:54:51.160939  166158 cri.go:89] found id: "f07f3c1fea64da214073a546691f51cbeed5401814bdfecf8c5c2d7d965b76dc"
	I1013 21:54:51.160943  166158 cri.go:89] found id: "cb21928f370feb3f97bfbce9a8d34340e56cbc857eadc6accb9f1d851d0886c3"
	I1013 21:54:51.160946  166158 cri.go:89] found id: "da8fa92310de218e142725844a91bcbbb0d5395e7f060fbf8775c56dcadde035"
	I1013 21:54:51.160950  166158 cri.go:89] found id: "082f903b4adc7c39f642e460f3161aa4ba0f568fdd713a3bde9e5748752b5eb7"
	I1013 21:54:51.160953  166158 cri.go:89] found id: "c60258466c3d1703891e0584fffda2246e17fe642e25b99afaf0cb9e26934b79"
	I1013 21:54:51.160957  166158 cri.go:89] found id: ""
	I1013 21:54:51.161008  166158 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 21:54:51.172472  166158 retry.go:31] will retry after 208.629537ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T21:54:51Z" level=error msg="open /run/runc: no such file or directory"
	I1013 21:54:51.381948  166158 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 21:54:51.394973  166158 pause.go:52] kubelet running: false
	I1013 21:54:51.395060  166158 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1013 21:54:51.528590  166158 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1013 21:54:51.528698  166158 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1013 21:54:51.594617  166158 cri.go:89] found id: "e813b1e3d651851055f0daeb10197c02a797ba29bdd3f3c236cb1f479151d3c7"
	I1013 21:54:51.594645  166158 cri.go:89] found id: "23af4780ca7ffb5793e522bd1ba38bec36ec20fb0c2870aa8f742802612b1675"
	I1013 21:54:51.594660  166158 cri.go:89] found id: "e66556a02af29b5639690e5cd3315bd331955efbe9dba130bc12a73fb77b4cb6"
	I1013 21:54:51.594664  166158 cri.go:89] found id: "37fc253c25c3b33e95033f66d59c43bec2cc24937ab8865d0b893a1ca44bd78b"
	I1013 21:54:51.594667  166158 cri.go:89] found id: "1dd85a64ce20b9dcdee25927856a7136cc2ca59b9128254415532367203a522a"
	I1013 21:54:51.594671  166158 cri.go:89] found id: "fc8e2ea74687b306f0b75c55f4977da5505971a5a74789317b3fab98a5e92f03"
	I1013 21:54:51.594674  166158 cri.go:89] found id: "b779d4fe4cba319c08ada9835653c4429eb4ab3cfcac3fd8b6f5055e1b826f3d"
	I1013 21:54:51.594676  166158 cri.go:89] found id: "20c5aff0e9704a0e9cf80f1bc3097b3adcc89c040fb42664c031162cc8af3eee"
	I1013 21:54:51.594679  166158 cri.go:89] found id: "e633956403c8d2bcdad3ed466b8fd307d41d45bfbb6d078f6e5f72f0c194d2ee"
	I1013 21:54:51.594711  166158 cri.go:89] found id: "f07f3c1fea64da214073a546691f51cbeed5401814bdfecf8c5c2d7d965b76dc"
	I1013 21:54:51.594719  166158 cri.go:89] found id: "cb21928f370feb3f97bfbce9a8d34340e56cbc857eadc6accb9f1d851d0886c3"
	I1013 21:54:51.594723  166158 cri.go:89] found id: "da8fa92310de218e142725844a91bcbbb0d5395e7f060fbf8775c56dcadde035"
	I1013 21:54:51.594726  166158 cri.go:89] found id: "082f903b4adc7c39f642e460f3161aa4ba0f568fdd713a3bde9e5748752b5eb7"
	I1013 21:54:51.594729  166158 cri.go:89] found id: "c60258466c3d1703891e0584fffda2246e17fe642e25b99afaf0cb9e26934b79"
	I1013 21:54:51.594732  166158 cri.go:89] found id: ""
	I1013 21:54:51.594796  166158 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 21:54:51.605144  166158 retry.go:31] will retry after 492.156993ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T21:54:51Z" level=error msg="open /run/runc: no such file or directory"
	I1013 21:54:52.097451  166158 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 21:54:52.110757  166158 pause.go:52] kubelet running: false
	I1013 21:54:52.110866  166158 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1013 21:54:52.253823  166158 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1013 21:54:52.253937  166158 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1013 21:54:52.320316  166158 cri.go:89] found id: "e813b1e3d651851055f0daeb10197c02a797ba29bdd3f3c236cb1f479151d3c7"
	I1013 21:54:52.320388  166158 cri.go:89] found id: "23af4780ca7ffb5793e522bd1ba38bec36ec20fb0c2870aa8f742802612b1675"
	I1013 21:54:52.320400  166158 cri.go:89] found id: "e66556a02af29b5639690e5cd3315bd331955efbe9dba130bc12a73fb77b4cb6"
	I1013 21:54:52.320405  166158 cri.go:89] found id: "37fc253c25c3b33e95033f66d59c43bec2cc24937ab8865d0b893a1ca44bd78b"
	I1013 21:54:52.320408  166158 cri.go:89] found id: "1dd85a64ce20b9dcdee25927856a7136cc2ca59b9128254415532367203a522a"
	I1013 21:54:52.320412  166158 cri.go:89] found id: "fc8e2ea74687b306f0b75c55f4977da5505971a5a74789317b3fab98a5e92f03"
	I1013 21:54:52.320415  166158 cri.go:89] found id: "b779d4fe4cba319c08ada9835653c4429eb4ab3cfcac3fd8b6f5055e1b826f3d"
	I1013 21:54:52.320418  166158 cri.go:89] found id: "20c5aff0e9704a0e9cf80f1bc3097b3adcc89c040fb42664c031162cc8af3eee"
	I1013 21:54:52.320421  166158 cri.go:89] found id: "e633956403c8d2bcdad3ed466b8fd307d41d45bfbb6d078f6e5f72f0c194d2ee"
	I1013 21:54:52.320427  166158 cri.go:89] found id: "f07f3c1fea64da214073a546691f51cbeed5401814bdfecf8c5c2d7d965b76dc"
	I1013 21:54:52.320431  166158 cri.go:89] found id: "cb21928f370feb3f97bfbce9a8d34340e56cbc857eadc6accb9f1d851d0886c3"
	I1013 21:54:52.320434  166158 cri.go:89] found id: "da8fa92310de218e142725844a91bcbbb0d5395e7f060fbf8775c56dcadde035"
	I1013 21:54:52.320437  166158 cri.go:89] found id: "082f903b4adc7c39f642e460f3161aa4ba0f568fdd713a3bde9e5748752b5eb7"
	I1013 21:54:52.320443  166158 cri.go:89] found id: "c60258466c3d1703891e0584fffda2246e17fe642e25b99afaf0cb9e26934b79"
	I1013 21:54:52.320462  166158 cri.go:89] found id: ""
	I1013 21:54:52.320529  166158 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 21:54:52.334403  166158 out.go:203] 
	W1013 21:54:52.337296  166158 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T21:54:52Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T21:54:52Z" level=error msg="open /run/runc: no such file or directory"
	
	W1013 21:54:52.337315  166158 out.go:285] * 
	* 
	W1013 21:54:52.398548  166158 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1013 21:54:52.401691  166158 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-arm64 pause -p pause-609677 --alsologtostderr -v=5" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-609677
helpers_test.go:243: (dbg) docker inspect pause-609677:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b1a494f13009c9d28b4063f7ca3fdf836a1996323ea7cca934c720d7f2aeccd6",
	        "Created": "2025-10-13T21:53:06.840898932Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 159570,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-13T21:53:06.909810941Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/b1a494f13009c9d28b4063f7ca3fdf836a1996323ea7cca934c720d7f2aeccd6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b1a494f13009c9d28b4063f7ca3fdf836a1996323ea7cca934c720d7f2aeccd6/hostname",
	        "HostsPath": "/var/lib/docker/containers/b1a494f13009c9d28b4063f7ca3fdf836a1996323ea7cca934c720d7f2aeccd6/hosts",
	        "LogPath": "/var/lib/docker/containers/b1a494f13009c9d28b4063f7ca3fdf836a1996323ea7cca934c720d7f2aeccd6/b1a494f13009c9d28b4063f7ca3fdf836a1996323ea7cca934c720d7f2aeccd6-json.log",
	        "Name": "/pause-609677",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-609677:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-609677",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b1a494f13009c9d28b4063f7ca3fdf836a1996323ea7cca934c720d7f2aeccd6",
	                "LowerDir": "/var/lib/docker/overlay2/423b897cc0f3c9290302a9eae77595f358ac8e8bc5daf507f28ef35bad6a168a-init/diff:/var/lib/docker/overlay2/160e637be34680c755ffc6109c686a7fe5c4dfcba06c9274b3806724dc064518/diff",
	                "MergedDir": "/var/lib/docker/overlay2/423b897cc0f3c9290302a9eae77595f358ac8e8bc5daf507f28ef35bad6a168a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/423b897cc0f3c9290302a9eae77595f358ac8e8bc5daf507f28ef35bad6a168a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/423b897cc0f3c9290302a9eae77595f358ac8e8bc5daf507f28ef35bad6a168a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-609677",
	                "Source": "/var/lib/docker/volumes/pause-609677/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-609677",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-609677",
	                "name.minikube.sigs.k8s.io": "pause-609677",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6eeda4ca35523fbf936044f54f396a57e4f04bb2b1e984f2fba454cbbaed4c2f",
	            "SandboxKey": "/var/run/docker/netns/6eeda4ca3552",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33026"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33027"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33030"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33028"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33029"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-609677": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f2:b4:7e:be:7a:b5",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "74a66ab38d5c3574cbcbdbc75813b7741e9ba5eaed628c4754810bd95dd597b9",
	                    "EndpointID": "156119fd85cd461e917b6f70f91f3dff8b99f392ae98e8a0da12ef0e5cc97a01",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-609677",
	                        "b1a494f13009"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
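The inspect output above shows the container itself still running and unpaused. If only the fields the post-mortem relies on (container state and the published SSH port) are needed, they can be pulled with the same Go-template form of docker inspect that minikube itself uses later in this log; an illustrative one-liner, not part of the recorded run:

	docker container inspect -f 'status={{.State.Status}} paused={{.State.Paused}} ssh-port={{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' pause-609677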
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-609677 -n pause-609677
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-609677 -n pause-609677: exit status 2 (321.125463ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-609677 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-609677 logs -n 25: (1.334748031s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-585265 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-585265       │ jenkins │ v1.37.0 │ 13 Oct 25 21:49 UTC │ 13 Oct 25 21:49 UTC │
	│ ssh     │ -p NoKubernetes-585265 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-585265       │ jenkins │ v1.37.0 │ 13 Oct 25 21:49 UTC │                     │
	│ stop    │ -p NoKubernetes-585265                                                                                                                   │ NoKubernetes-585265       │ jenkins │ v1.37.0 │ 13 Oct 25 21:49 UTC │ 13 Oct 25 21:49 UTC │
	│ start   │ -p NoKubernetes-585265 --driver=docker  --container-runtime=crio                                                                         │ NoKubernetes-585265       │ jenkins │ v1.37.0 │ 13 Oct 25 21:49 UTC │ 13 Oct 25 21:50 UTC │
	│ start   │ -p missing-upgrade-403510 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-403510    │ jenkins │ v1.37.0 │ 13 Oct 25 21:50 UTC │ 13 Oct 25 21:50 UTC │
	│ ssh     │ -p NoKubernetes-585265 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-585265       │ jenkins │ v1.37.0 │ 13 Oct 25 21:50 UTC │                     │
	│ delete  │ -p NoKubernetes-585265                                                                                                                   │ NoKubernetes-585265       │ jenkins │ v1.37.0 │ 13 Oct 25 21:50 UTC │ 13 Oct 25 21:50 UTC │
	│ start   │ -p kubernetes-upgrade-304765 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-304765 │ jenkins │ v1.37.0 │ 13 Oct 25 21:50 UTC │ 13 Oct 25 21:50 UTC │
	│ stop    │ -p kubernetes-upgrade-304765                                                                                                             │ kubernetes-upgrade-304765 │ jenkins │ v1.37.0 │ 13 Oct 25 21:50 UTC │ 13 Oct 25 21:50 UTC │
	│ start   │ -p kubernetes-upgrade-304765 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-304765 │ jenkins │ v1.37.0 │ 13 Oct 25 21:50 UTC │ 13 Oct 25 21:53 UTC │
	│ delete  │ -p missing-upgrade-403510                                                                                                                │ missing-upgrade-403510    │ jenkins │ v1.37.0 │ 13 Oct 25 21:50 UTC │ 13 Oct 25 21:50 UTC │
	│ start   │ -p stopped-upgrade-014468 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-014468    │ jenkins │ v1.32.0 │ 13 Oct 25 21:50 UTC │ 13 Oct 25 21:51 UTC │
	│ stop    │ stopped-upgrade-014468 stop                                                                                                              │ stopped-upgrade-014468    │ jenkins │ v1.32.0 │ 13 Oct 25 21:51 UTC │ 13 Oct 25 21:51 UTC │
	│ start   │ -p stopped-upgrade-014468 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-014468    │ jenkins │ v1.37.0 │ 13 Oct 25 21:51 UTC │ 13 Oct 25 21:51 UTC │
	│ delete  │ -p stopped-upgrade-014468                                                                                                                │ stopped-upgrade-014468    │ jenkins │ v1.37.0 │ 13 Oct 25 21:51 UTC │ 13 Oct 25 21:52 UTC │
	│ start   │ -p running-upgrade-601721 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ running-upgrade-601721    │ jenkins │ v1.32.0 │ 13 Oct 25 21:52 UTC │ 13 Oct 25 21:52 UTC │
	│ start   │ -p running-upgrade-601721 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ running-upgrade-601721    │ jenkins │ v1.37.0 │ 13 Oct 25 21:52 UTC │ 13 Oct 25 21:52 UTC │
	│ delete  │ -p running-upgrade-601721                                                                                                                │ running-upgrade-601721    │ jenkins │ v1.37.0 │ 13 Oct 25 21:52 UTC │ 13 Oct 25 21:53 UTC │
	│ start   │ -p pause-609677 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-609677              │ jenkins │ v1.37.0 │ 13 Oct 25 21:53 UTC │ 13 Oct 25 21:54 UTC │
	│ start   │ -p kubernetes-upgrade-304765 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                        │ kubernetes-upgrade-304765 │ jenkins │ v1.37.0 │ 13 Oct 25 21:53 UTC │                     │
	│ start   │ -p kubernetes-upgrade-304765 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-304765 │ jenkins │ v1.37.0 │ 13 Oct 25 21:53 UTC │ 13 Oct 25 21:53 UTC │
	│ delete  │ -p kubernetes-upgrade-304765                                                                                                             │ kubernetes-upgrade-304765 │ jenkins │ v1.37.0 │ 13 Oct 25 21:53 UTC │ 13 Oct 25 21:53 UTC │
	│ start   │ -p force-systemd-flag-257205 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio              │ force-systemd-flag-257205 │ jenkins │ v1.37.0 │ 13 Oct 25 21:53 UTC │                     │
	│ start   │ -p pause-609677 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-609677              │ jenkins │ v1.37.0 │ 13 Oct 25 21:54 UTC │ 13 Oct 25 21:54 UTC │
	│ pause   │ -p pause-609677 --alsologtostderr -v=5                                                                                                   │ pause-609677              │ jenkins │ v1.37.0 │ 13 Oct 25 21:54 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 21:54:23
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 21:54:23.746379  165023 out.go:360] Setting OutFile to fd 1 ...
	I1013 21:54:23.746559  165023 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:54:23.746570  165023 out.go:374] Setting ErrFile to fd 2...
	I1013 21:54:23.746575  165023 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:54:23.746869  165023 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-2495/.minikube/bin
	I1013 21:54:23.747257  165023 out.go:368] Setting JSON to false
	I1013 21:54:23.748282  165023 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":5798,"bootTime":1760386666,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1013 21:54:23.748350  165023 start.go:141] virtualization:  
	I1013 21:54:23.753335  165023 out.go:179] * [pause-609677] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1013 21:54:23.756276  165023 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 21:54:23.756382  165023 notify.go:220] Checking for updates...
	I1013 21:54:23.762168  165023 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 21:54:23.765099  165023 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-2495/kubeconfig
	I1013 21:54:23.768143  165023 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-2495/.minikube
	I1013 21:54:23.770962  165023 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1013 21:54:23.773877  165023 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 21:54:23.777270  165023 config.go:182] Loaded profile config "pause-609677": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:54:23.777810  165023 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 21:54:23.809288  165023 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1013 21:54:23.809401  165023 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 21:54:23.879517  165023 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-13 21:54:23.866083137 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 21:54:23.879654  165023 docker.go:318] overlay module found
	I1013 21:54:23.882796  165023 out.go:179] * Using the docker driver based on existing profile
	I1013 21:54:23.885573  165023 start.go:305] selected driver: docker
	I1013 21:54:23.885590  165023 start.go:925] validating driver "docker" against &{Name:pause-609677 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-609677 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false regi
stry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 21:54:23.885730  165023 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 21:54:23.885829  165023 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 21:54:23.940215  165023 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-13 21:54:23.93120903 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 21:54:23.940975  165023 cni.go:84] Creating CNI manager for ""
	I1013 21:54:23.941043  165023 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 21:54:23.941094  165023 start.go:349] cluster config:
	{Name:pause-609677 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-609677 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false
storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 21:54:23.944302  165023 out.go:179] * Starting "pause-609677" primary control-plane node in "pause-609677" cluster
	I1013 21:54:23.947082  165023 cache.go:123] Beginning downloading kic base image for docker with crio
	I1013 21:54:23.949971  165023 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1013 21:54:23.952746  165023 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 21:54:23.952793  165023 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-2495/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1013 21:54:23.952805  165023 cache.go:58] Caching tarball of preloaded images
	I1013 21:54:23.952819  165023 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1013 21:54:23.952881  165023 preload.go:233] Found /home/jenkins/minikube-integration/21724-2495/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1013 21:54:23.952891  165023 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1013 21:54:23.953036  165023 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/pause-609677/config.json ...
	I1013 21:54:23.972530  165023 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1013 21:54:23.972553  165023 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1013 21:54:23.972578  165023 cache.go:232] Successfully downloaded all kic artifacts
	I1013 21:54:23.972600  165023 start.go:360] acquireMachinesLock for pause-609677: {Name:mkeef98324cfd0451e87c760720ad13d14880639 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 21:54:23.972669  165023 start.go:364] duration metric: took 42.468µs to acquireMachinesLock for "pause-609677"
	I1013 21:54:23.972691  165023 start.go:96] Skipping create...Using existing machine configuration
	I1013 21:54:23.972704  165023 fix.go:54] fixHost starting: 
	I1013 21:54:23.972973  165023 cli_runner.go:164] Run: docker container inspect pause-609677 --format={{.State.Status}}
	I1013 21:54:23.988906  165023 fix.go:112] recreateIfNeeded on pause-609677: state=Running err=<nil>
	W1013 21:54:23.988933  165023 fix.go:138] unexpected machine state, will restart: <nil>
	I1013 21:54:23.992214  165023 out.go:252] * Updating the running docker "pause-609677" container ...
	I1013 21:54:23.992247  165023 machine.go:93] provisionDockerMachine start ...
	I1013 21:54:23.992321  165023 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-609677
	I1013 21:54:24.012014  165023 main.go:141] libmachine: Using SSH client type: native
	I1013 21:54:24.012357  165023 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33026 <nil> <nil>}
	I1013 21:54:24.012373  165023 main.go:141] libmachine: About to run SSH command:
	hostname
	I1013 21:54:24.159162  165023 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-609677
	
	I1013 21:54:24.159182  165023 ubuntu.go:182] provisioning hostname "pause-609677"
	I1013 21:54:24.159240  165023 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-609677
	I1013 21:54:24.176671  165023 main.go:141] libmachine: Using SSH client type: native
	I1013 21:54:24.176985  165023 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33026 <nil> <nil>}
	I1013 21:54:24.177000  165023 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-609677 && echo "pause-609677" | sudo tee /etc/hostname
	I1013 21:54:24.332820  165023 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-609677
	
	I1013 21:54:24.332904  165023 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-609677
	I1013 21:54:24.350760  165023 main.go:141] libmachine: Using SSH client type: native
	I1013 21:54:24.351068  165023 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33026 <nil> <nil>}
	I1013 21:54:24.351089  165023 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-609677' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-609677/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-609677' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 21:54:24.499869  165023 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1013 21:54:24.499899  165023 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-2495/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-2495/.minikube}
	I1013 21:54:24.499953  165023 ubuntu.go:190] setting up certificates
	I1013 21:54:24.499970  165023 provision.go:84] configureAuth start
	I1013 21:54:24.500046  165023 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-609677
	I1013 21:54:24.516842  165023 provision.go:143] copyHostCerts
	I1013 21:54:24.516912  165023 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-2495/.minikube/ca.pem, removing ...
	I1013 21:54:24.516934  165023 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-2495/.minikube/ca.pem
	I1013 21:54:24.517018  165023 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-2495/.minikube/ca.pem (1082 bytes)
	I1013 21:54:24.517125  165023 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-2495/.minikube/cert.pem, removing ...
	I1013 21:54:24.517137  165023 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-2495/.minikube/cert.pem
	I1013 21:54:24.517166  165023 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-2495/.minikube/cert.pem (1123 bytes)
	I1013 21:54:24.517225  165023 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-2495/.minikube/key.pem, removing ...
	I1013 21:54:24.517233  165023 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-2495/.minikube/key.pem
	I1013 21:54:24.517258  165023 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-2495/.minikube/key.pem (1675 bytes)
	I1013 21:54:24.517314  165023 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-2495/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca-key.pem org=jenkins.pause-609677 san=[127.0.0.1 192.168.85.2 localhost minikube pause-609677]
	I1013 21:54:25.668857  165023 provision.go:177] copyRemoteCerts
	I1013 21:54:25.668927  165023 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 21:54:25.668976  165023 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-609677
	I1013 21:54:25.685908  165023 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33026 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/pause-609677/id_rsa Username:docker}
	I1013 21:54:25.791850  165023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1013 21:54:25.809320  165023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1013 21:54:25.827382  165023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1013 21:54:25.845429  165023 provision.go:87] duration metric: took 1.345436143s to configureAuth
	I1013 21:54:25.845507  165023 ubuntu.go:206] setting minikube options for container-runtime
	I1013 21:54:25.845757  165023 config.go:182] Loaded profile config "pause-609677": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:54:25.845867  165023 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-609677
	I1013 21:54:25.863319  165023 main.go:141] libmachine: Using SSH client type: native
	I1013 21:54:25.863652  165023 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33026 <nil> <nil>}
	I1013 21:54:25.863672  165023 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1013 21:54:31.314975  165023 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1013 21:54:31.315003  165023 machine.go:96] duration metric: took 7.322746857s to provisionDockerMachine
	I1013 21:54:31.315016  165023 start.go:293] postStartSetup for "pause-609677" (driver="docker")
	I1013 21:54:31.315027  165023 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 21:54:31.315092  165023 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 21:54:31.315151  165023 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-609677
	I1013 21:54:31.334676  165023 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33026 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/pause-609677/id_rsa Username:docker}
	I1013 21:54:31.435585  165023 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 21:54:31.438987  165023 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1013 21:54:31.439013  165023 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1013 21:54:31.439024  165023 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-2495/.minikube/addons for local assets ...
	I1013 21:54:31.439078  165023 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-2495/.minikube/files for local assets ...
	I1013 21:54:31.439157  165023 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem -> 42992.pem in /etc/ssl/certs
	I1013 21:54:31.439263  165023 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1013 21:54:31.446694  165023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem --> /etc/ssl/certs/42992.pem (1708 bytes)
	I1013 21:54:31.463718  165023 start.go:296] duration metric: took 148.686132ms for postStartSetup
	I1013 21:54:31.463851  165023 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 21:54:31.463896  165023 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-609677
	I1013 21:54:31.481289  165023 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33026 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/pause-609677/id_rsa Username:docker}
	I1013 21:54:31.580915  165023 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1013 21:54:31.585601  165023 fix.go:56] duration metric: took 7.612893567s for fixHost
	I1013 21:54:31.585622  165023 start.go:83] releasing machines lock for "pause-609677", held for 7.612941722s
	I1013 21:54:31.585685  165023 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-609677
	I1013 21:54:31.601925  165023 ssh_runner.go:195] Run: cat /version.json
	I1013 21:54:31.601973  165023 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-609677
	I1013 21:54:31.601998  165023 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 21:54:31.602061  165023 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-609677
	I1013 21:54:31.629296  165023 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33026 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/pause-609677/id_rsa Username:docker}
	I1013 21:54:31.630257  165023 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33026 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/pause-609677/id_rsa Username:docker}
	I1013 21:54:31.747589  165023 ssh_runner.go:195] Run: systemctl --version
	I1013 21:54:31.841575  165023 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1013 21:54:31.881980  165023 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 21:54:31.886831  165023 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 21:54:31.886950  165023 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 21:54:31.895296  165023 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1013 21:54:31.895321  165023 start.go:495] detecting cgroup driver to use...
	I1013 21:54:31.895353  165023 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1013 21:54:31.895400  165023 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1013 21:54:31.910812  165023 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1013 21:54:31.924011  165023 docker.go:218] disabling cri-docker service (if available) ...
	I1013 21:54:31.924082  165023 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 21:54:31.939760  165023 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 21:54:31.952915  165023 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 21:54:32.088919  165023 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 21:54:32.228031  165023 docker.go:234] disabling docker service ...
	I1013 21:54:32.228105  165023 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 21:54:32.249778  165023 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 21:54:32.263755  165023 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 21:54:32.404279  165023 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 21:54:32.551291  165023 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1013 21:54:32.564170  165023 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 21:54:32.578597  165023 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1013 21:54:32.578731  165023 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 21:54:32.587931  165023 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1013 21:54:32.588046  165023 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 21:54:32.597184  165023 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 21:54:32.606259  165023 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 21:54:32.615096  165023 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 21:54:32.623755  165023 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 21:54:32.633316  165023 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 21:54:32.642050  165023 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 21:54:32.655195  165023 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 21:54:32.666111  165023 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1013 21:54:32.674767  165023 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 21:54:32.818269  165023 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1013 21:54:32.965107  165023 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1013 21:54:32.965233  165023 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1013 21:54:32.969050  165023 start.go:563] Will wait 60s for crictl version
	I1013 21:54:32.969116  165023 ssh_runner.go:195] Run: which crictl
	I1013 21:54:32.972525  165023 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1013 21:54:33.002933  165023 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1013 21:54:33.003109  165023 ssh_runner.go:195] Run: crio --version
	I1013 21:54:33.037712  165023 ssh_runner.go:195] Run: crio --version
	I1013 21:54:33.069332  165023 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1013 21:54:33.070477  165023 cli_runner.go:164] Run: docker network inspect pause-609677 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 21:54:33.086048  165023 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1013 21:54:33.090076  165023 kubeadm.go:883] updating cluster {Name:pause-609677 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-609677 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 21:54:33.090223  165023 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 21:54:33.090287  165023 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 21:54:33.122239  165023 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 21:54:33.122263  165023 crio.go:433] Images already preloaded, skipping extraction
	I1013 21:54:33.122333  165023 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 21:54:33.148761  165023 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 21:54:33.148792  165023 cache_images.go:85] Images are preloaded, skipping loading
	I1013 21:54:33.148801  165023 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1013 21:54:33.148912  165023 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-609677 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-609677 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1013 21:54:33.148996  165023 ssh_runner.go:195] Run: crio config
	I1013 21:54:33.223327  165023 cni.go:84] Creating CNI manager for ""
	I1013 21:54:33.223396  165023 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 21:54:33.223423  165023 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1013 21:54:33.223500  165023 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-609677 NodeName:pause-609677 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 21:54:33.223678  165023 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-609677"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1013 21:54:33.223808  165023 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1013 21:54:33.231658  165023 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 21:54:33.231752  165023 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 21:54:33.239389  165023 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1013 21:54:33.252310  165023 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 21:54:33.266932  165023 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1013 21:54:33.280069  165023 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1013 21:54:33.283695  165023 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 21:54:33.421047  165023 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 21:54:33.436332  165023 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/pause-609677 for IP: 192.168.85.2
	I1013 21:54:33.436395  165023 certs.go:195] generating shared ca certs ...
	I1013 21:54:33.436423  165023 certs.go:227] acquiring lock for ca certs: {Name:mk2386d3847709c1fe7ff4ab092e2e3fd8551167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:54:33.436583  165023 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-2495/.minikube/ca.key
	I1013 21:54:33.436662  165023 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.key
	I1013 21:54:33.436707  165023 certs.go:257] generating profile certs ...
	I1013 21:54:33.436836  165023 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/pause-609677/client.key
	I1013 21:54:33.436942  165023 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/pause-609677/apiserver.key.e0590797
	I1013 21:54:33.437009  165023 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/pause-609677/proxy-client.key
	I1013 21:54:33.437152  165023 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/4299.pem (1338 bytes)
	W1013 21:54:33.437208  165023 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-2495/.minikube/certs/4299_empty.pem, impossibly tiny 0 bytes
	I1013 21:54:33.437230  165023 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca-key.pem (1679 bytes)
	I1013 21:54:33.437288  165023 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem (1082 bytes)
	I1013 21:54:33.437335  165023 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem (1123 bytes)
	I1013 21:54:33.437373  165023 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/key.pem (1675 bytes)
	I1013 21:54:33.437447  165023 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem (1708 bytes)
	I1013 21:54:33.438050  165023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 21:54:33.456940  165023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1013 21:54:33.476234  165023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 21:54:33.495304  165023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1013 21:54:33.515262  165023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/pause-609677/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1013 21:54:33.535686  165023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/pause-609677/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1013 21:54:33.555182  165023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/pause-609677/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 21:54:33.574041  165023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/pause-609677/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1013 21:54:33.591195  165023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 21:54:33.609994  165023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/certs/4299.pem --> /usr/share/ca-certificates/4299.pem (1338 bytes)
	I1013 21:54:33.627418  165023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem --> /usr/share/ca-certificates/42992.pem (1708 bytes)
	I1013 21:54:33.644805  165023 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 21:54:33.657769  165023 ssh_runner.go:195] Run: openssl version
	I1013 21:54:33.664359  165023 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 21:54:33.675337  165023 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 21:54:33.679142  165023 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 20:59 /usr/share/ca-certificates/minikubeCA.pem
	I1013 21:54:33.679204  165023 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 21:54:33.720535  165023 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1013 21:54:33.728027  165023 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4299.pem && ln -fs /usr/share/ca-certificates/4299.pem /etc/ssl/certs/4299.pem"
	I1013 21:54:33.735579  165023 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4299.pem
	I1013 21:54:33.739001  165023 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 13 21:06 /usr/share/ca-certificates/4299.pem
	I1013 21:54:33.739058  165023 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4299.pem
	I1013 21:54:33.779574  165023 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4299.pem /etc/ssl/certs/51391683.0"
	I1013 21:54:33.787008  165023 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/42992.pem && ln -fs /usr/share/ca-certificates/42992.pem /etc/ssl/certs/42992.pem"
	I1013 21:54:33.794477  165023 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42992.pem
	I1013 21:54:33.797781  165023 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 13 21:06 /usr/share/ca-certificates/42992.pem
	I1013 21:54:33.797840  165023 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42992.pem
	I1013 21:54:33.838478  165023 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/42992.pem /etc/ssl/certs/3ec20f2e.0"
	I1013 21:54:33.845965  165023 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 21:54:33.849386  165023 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1013 21:54:33.889777  165023 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1013 21:54:33.931020  165023 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1013 21:54:33.972493  165023 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1013 21:54:34.015199  165023 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1013 21:54:34.056891  165023 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
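	The openssl -checkend 86400 runs above confirm that each existing control-plane certificate stays valid for at least another 24 hours before it is reused. The same check can be approximated with the Go standard library; this is only an illustrative sketch (not minikube's code), with the certificate path copied from the log above:

	// certcheck.go - illustrative sketch only; not part of minikube.
	// Mirrors `openssl x509 -noout -in <cert> -checkend 86400`: report true
	// if the certificate expires within the next 24 hours.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM data found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		// Path taken from the log; adjust for your own node/profile.
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(2)
		}
		if soon {
			fmt.Println("certificate will expire within 24h")
			os.Exit(1)
		}
		fmt.Println("certificate valid for at least 24h more")
	}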
	I1013 21:54:34.098551  165023 kubeadm.go:400] StartCluster: {Name:pause-609677 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-609677 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 21:54:34.098670  165023 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 21:54:34.098732  165023 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 21:54:34.131819  165023 cri.go:89] found id: "20c5aff0e9704a0e9cf80f1bc3097b3adcc89c040fb42664c031162cc8af3eee"
	I1013 21:54:34.131840  165023 cri.go:89] found id: "e633956403c8d2bcdad3ed466b8fd307d41d45bfbb6d078f6e5f72f0c194d2ee"
	I1013 21:54:34.131844  165023 cri.go:89] found id: "f07f3c1fea64da214073a546691f51cbeed5401814bdfecf8c5c2d7d965b76dc"
	I1013 21:54:34.131848  165023 cri.go:89] found id: "cb21928f370feb3f97bfbce9a8d34340e56cbc857eadc6accb9f1d851d0886c3"
	I1013 21:54:34.131851  165023 cri.go:89] found id: "da8fa92310de218e142725844a91bcbbb0d5395e7f060fbf8775c56dcadde035"
	I1013 21:54:34.131854  165023 cri.go:89] found id: "082f903b4adc7c39f642e460f3161aa4ba0f568fdd713a3bde9e5748752b5eb7"
	I1013 21:54:34.131857  165023 cri.go:89] found id: "c60258466c3d1703891e0584fffda2246e17fe642e25b99afaf0cb9e26934b79"
	I1013 21:54:34.131860  165023 cri.go:89] found id: ""
	I1013 21:54:34.131909  165023 ssh_runner.go:195] Run: sudo runc list -f json
	W1013 21:54:34.142394  165023 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T21:54:34Z" level=error msg="open /run/runc: no such file or directory"
	I1013 21:54:34.142485  165023 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1013 21:54:34.150046  165023 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1013 21:54:34.150066  165023 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1013 21:54:34.150145  165023 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1013 21:54:34.157296  165023 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1013 21:54:34.157841  165023 kubeconfig.go:125] found "pause-609677" server: "https://192.168.85.2:8443"
	I1013 21:54:34.158409  165023 kapi.go:59] client config for pause-609677: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21724-2495/.minikube/profiles/pause-609677/client.crt", KeyFile:"/home/jenkins/minikube-integration/21724-2495/.minikube/profiles/pause-609677/client.key", CAFile:"/home/jenkins/minikube-integration/21724-2495/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(
nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120110), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1013 21:54:34.158867  165023 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1013 21:54:34.158888  165023 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1013 21:54:34.158893  165023 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1013 21:54:34.158898  165023 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1013 21:54:34.158903  165023 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1013 21:54:34.159183  165023 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1013 21:54:34.166859  165023 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1013 21:54:34.166893  165023 kubeadm.go:601] duration metric: took 16.820973ms to restartPrimaryControlPlane
	I1013 21:54:34.166902  165023 kubeadm.go:402] duration metric: took 68.361154ms to StartCluster
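	The "does not require reconfiguration" decision a few lines above is driven by the `sudo diff -u` between the kubeadm config already on the node and the freshly rendered kubeadm.yaml.new. A rough Go sketch of that decision (illustrative only; paths taken from the log, and the exit-code handling is plain diff semantics, 0 for identical and 1 for differing files):

	// needsreconfig.go - illustrative sketch only; not part of minikube.
	// Compares the on-node kubeadm config with the newly generated one and
	// reports whether the control plane would need reconfiguration.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("sudo", "diff", "-u",
			"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
		err := cmd.Run()
		var exitErr *exec.ExitError
		switch {
		case err == nil:
			fmt.Println("running cluster does not require reconfiguration")
		case errors.As(err, &exitErr) && exitErr.ExitCode() == 1:
			fmt.Println("kubeadm config changed; control plane needs reconfiguration")
		default:
			fmt.Println("diff failed:", err)
		}
	}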
	I1013 21:54:34.166949  165023 settings.go:142] acquiring lock: {Name:mk4a4b065845724eb9b4bb1832a39a02e57dd066 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:54:34.167037  165023 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-2495/kubeconfig
	I1013 21:54:34.167677  165023 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/kubeconfig: {Name:mke211bdcf461494c5205a242d94f4d44bbce10e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:54:34.168003  165023 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 21:54:34.168288  165023 config.go:182] Loaded profile config "pause-609677": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:54:34.168356  165023 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1013 21:54:34.169365  165023 out.go:179] * Verifying Kubernetes components...
	I1013 21:54:34.170007  165023 out.go:179] * Enabled addons: 
	I1013 21:54:34.170840  165023 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 21:54:34.171446  165023 addons.go:514] duration metric: took 3.081444ms for enable addons: enabled=[]
	I1013 21:54:34.312030  165023 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 21:54:34.325738  165023 node_ready.go:35] waiting up to 6m0s for node "pause-609677" to be "Ready" ...
	I1013 21:54:38.037753  165023 node_ready.go:49] node "pause-609677" is "Ready"
	I1013 21:54:38.037781  165023 node_ready.go:38] duration metric: took 3.712008029s for node "pause-609677" to be "Ready" ...
	I1013 21:54:38.037794  165023 api_server.go:52] waiting for apiserver process to appear ...
	I1013 21:54:38.037851  165023 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 21:54:38.061550  165023 api_server.go:72] duration metric: took 3.89351116s to wait for apiserver process to appear ...
	I1013 21:54:38.061574  165023 api_server.go:88] waiting for apiserver healthz status ...
	I1013 21:54:38.061629  165023 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1013 21:54:38.184114  165023 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1013 21:54:38.184183  165023 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1013 21:54:38.561670  165023 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1013 21:54:38.570410  165023 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1013 21:54:38.570438  165023 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1013 21:54:39.062275  165023 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1013 21:54:39.070721  165023 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1013 21:54:39.071857  165023 api_server.go:141] control plane version: v1.34.1
	I1013 21:54:39.071884  165023 api_server.go:131] duration metric: took 1.010280481s to wait for apiserver health ...
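	The api_server.go wait above keeps polling /healthz, tolerating 500 responses while post-start hooks such as rbac/bootstrap-roles and bootstrap-controller finish, and succeeds once a plain 200 "ok" comes back. A rough Go approximation of that loop, assuming the endpoint from the log and skipping TLS verification where a real client would present the profile's client certificate:

	// healthzwait.go - illustrative sketch only; not part of minikube.
	// Polls the apiserver /healthz endpoint until it returns 200, retrying
	// through the 500s emitted while post-start hooks are still completing.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		url := "https://192.168.85.2:8443/healthz" // endpoint taken from the log
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		for attempt := 0; attempt < 60; attempt++ {
			resp, err := client.Get(url)
			if err != nil {
				fmt.Println("healthz request failed, retrying:", err)
			} else {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("healthz:", string(body))
					return
				}
				fmt.Printf("healthz returned %d, retrying...\n", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for apiserver healthz")
	}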
	I1013 21:54:39.071895  165023 system_pods.go:43] waiting for kube-system pods to appear ...
	I1013 21:54:39.075535  165023 system_pods.go:59] 7 kube-system pods found
	I1013 21:54:39.075573  165023 system_pods.go:61] "coredns-66bc5c9577-9hxkk" [22b9d94d-f872-48ad-a5fa-77a5bd5186d1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 21:54:39.075584  165023 system_pods.go:61] "etcd-pause-609677" [40e77bdb-3010-4ec4-8431-0bb665621837] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1013 21:54:39.075590  165023 system_pods.go:61] "kindnet-gbt7d" [41ae05d3-8177-4a09-8617-d9c26c154582] Running
	I1013 21:54:39.075596  165023 system_pods.go:61] "kube-apiserver-pause-609677" [1b8b1ce4-baf2-4e1e-b1cb-ceafdc25add4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1013 21:54:39.075604  165023 system_pods.go:61] "kube-controller-manager-pause-609677" [ab49c9ae-c703-47c0-a878-d45b799d6592] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1013 21:54:39.075614  165023 system_pods.go:61] "kube-proxy-6zl75" [ae7ccd7e-e862-4dad-9fdf-c049be8b6d2e] Running
	I1013 21:54:39.075622  165023 system_pods.go:61] "kube-scheduler-pause-609677" [1f6f9b94-01cc-421d-b461-d5cdf4c7dd42] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1013 21:54:39.075630  165023 system_pods.go:74] duration metric: took 3.727818ms to wait for pod list to return data ...
	I1013 21:54:39.075650  165023 default_sa.go:34] waiting for default service account to be created ...
	I1013 21:54:39.078230  165023 default_sa.go:45] found service account: "default"
	I1013 21:54:39.078257  165023 default_sa.go:55] duration metric: took 2.600389ms for default service account to be created ...
	I1013 21:54:39.078267  165023 system_pods.go:116] waiting for k8s-apps to be running ...
	I1013 21:54:39.081114  165023 system_pods.go:86] 7 kube-system pods found
	I1013 21:54:39.081161  165023 system_pods.go:89] "coredns-66bc5c9577-9hxkk" [22b9d94d-f872-48ad-a5fa-77a5bd5186d1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 21:54:39.081208  165023 system_pods.go:89] "etcd-pause-609677" [40e77bdb-3010-4ec4-8431-0bb665621837] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1013 21:54:39.081228  165023 system_pods.go:89] "kindnet-gbt7d" [41ae05d3-8177-4a09-8617-d9c26c154582] Running
	I1013 21:54:39.081244  165023 system_pods.go:89] "kube-apiserver-pause-609677" [1b8b1ce4-baf2-4e1e-b1cb-ceafdc25add4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1013 21:54:39.081254  165023 system_pods.go:89] "kube-controller-manager-pause-609677" [ab49c9ae-c703-47c0-a878-d45b799d6592] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1013 21:54:39.081282  165023 system_pods.go:89] "kube-proxy-6zl75" [ae7ccd7e-e862-4dad-9fdf-c049be8b6d2e] Running
	I1013 21:54:39.081309  165023 system_pods.go:89] "kube-scheduler-pause-609677" [1f6f9b94-01cc-421d-b461-d5cdf4c7dd42] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1013 21:54:39.081324  165023 system_pods.go:126] duration metric: took 3.051061ms to wait for k8s-apps to be running ...
	I1013 21:54:39.081334  165023 system_svc.go:44] waiting for kubelet service to be running ....
	I1013 21:54:39.081436  165023 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 21:54:39.095000  165023 system_svc.go:56] duration metric: took 13.649095ms WaitForService to wait for kubelet
	I1013 21:54:39.095032  165023 kubeadm.go:586] duration metric: took 4.926997551s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 21:54:39.095050  165023 node_conditions.go:102] verifying NodePressure condition ...
	I1013 21:54:39.098012  165023 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1013 21:54:39.098039  165023 node_conditions.go:123] node cpu capacity is 2
	I1013 21:54:39.098051  165023 node_conditions.go:105] duration metric: took 2.995792ms to run NodePressure ...
	I1013 21:54:39.098063  165023 start.go:241] waiting for startup goroutines ...
	I1013 21:54:39.098074  165023 start.go:246] waiting for cluster config update ...
	I1013 21:54:39.098086  165023 start.go:255] writing updated cluster config ...
	I1013 21:54:39.098393  165023 ssh_runner.go:195] Run: rm -f paused
	I1013 21:54:39.102080  165023 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 21:54:39.102591  165023 kapi.go:59] client config for pause-609677: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21724-2495/.minikube/profiles/pause-609677/client.crt", KeyFile:"/home/jenkins/minikube-integration/21724-2495/.minikube/profiles/pause-609677/client.key", CAFile:"/home/jenkins/minikube-integration/21724-2495/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(
nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120110), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1013 21:54:39.107647  165023 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-9hxkk" in "kube-system" namespace to be "Ready" or be gone ...
	W1013 21:54:41.113255  165023 pod_ready.go:104] pod "coredns-66bc5c9577-9hxkk" is not "Ready", error: <nil>
	W1013 21:54:43.614804  165023 pod_ready.go:104] pod "coredns-66bc5c9577-9hxkk" is not "Ready", error: <nil>
	W1013 21:54:46.113050  165023 pod_ready.go:104] pod "coredns-66bc5c9577-9hxkk" is not "Ready", error: <nil>
	I1013 21:54:47.114389  165023 pod_ready.go:94] pod "coredns-66bc5c9577-9hxkk" is "Ready"
	I1013 21:54:47.114415  165023 pod_ready.go:86] duration metric: took 8.006744552s for pod "coredns-66bc5c9577-9hxkk" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:54:47.117044  165023 pod_ready.go:83] waiting for pod "etcd-pause-609677" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:54:47.121531  165023 pod_ready.go:94] pod "etcd-pause-609677" is "Ready"
	I1013 21:54:47.121559  165023 pod_ready.go:86] duration metric: took 4.490924ms for pod "etcd-pause-609677" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:54:47.124044  165023 pod_ready.go:83] waiting for pod "kube-apiserver-pause-609677" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:54:47.128683  165023 pod_ready.go:94] pod "kube-apiserver-pause-609677" is "Ready"
	I1013 21:54:47.128709  165023 pod_ready.go:86] duration metric: took 4.641049ms for pod "kube-apiserver-pause-609677" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:54:47.130943  165023 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-609677" in "kube-system" namespace to be "Ready" or be gone ...
	W1013 21:54:49.137664  165023 pod_ready.go:104] pod "kube-controller-manager-pause-609677" is not "Ready", error: <nil>
	I1013 21:54:49.636154  165023 pod_ready.go:94] pod "kube-controller-manager-pause-609677" is "Ready"
	I1013 21:54:49.636186  165023 pod_ready.go:86] duration metric: took 2.505216604s for pod "kube-controller-manager-pause-609677" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:54:49.638426  165023 pod_ready.go:83] waiting for pod "kube-proxy-6zl75" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:54:49.912651  165023 pod_ready.go:94] pod "kube-proxy-6zl75" is "Ready"
	I1013 21:54:49.912686  165023 pod_ready.go:86] duration metric: took 274.239669ms for pod "kube-proxy-6zl75" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:54:50.112252  165023 pod_ready.go:83] waiting for pod "kube-scheduler-pause-609677" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:54:50.512462  165023 pod_ready.go:94] pod "kube-scheduler-pause-609677" is "Ready"
	I1013 21:54:50.512487  165023 pod_ready.go:86] duration metric: took 400.209055ms for pod "kube-scheduler-pause-609677" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:54:50.512500  165023 pod_ready.go:40] duration metric: took 11.410391122s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 21:54:50.568862  165023 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1013 21:54:50.573836  165023 out.go:179] * Done! kubectl is now configured to use "pause-609677" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 13 21:54:34 pause-609677 crio[2077]: time="2025-10-13T21:54:34.777906798Z" level=info msg="Starting container: e66556a02af29b5639690e5cd3315bd331955efbe9dba130bc12a73fb77b4cb6" id=c348e06e-58d3-4b43-9fc9-585305443851 name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 21:54:34 pause-609677 crio[2077]: time="2025-10-13T21:54:34.781589448Z" level=info msg="Started container" PID=2375 containerID=37fc253c25c3b33e95033f66d59c43bec2cc24937ab8865d0b893a1ca44bd78b description=kube-system/kube-proxy-6zl75/kube-proxy id=17261b53-d28a-44ae-85df-428d8d9aea49 name=/runtime.v1.RuntimeService/StartContainer sandboxID=80f4d2924ee9954d9ed6dcbd3b752ef627b5d70691f6abed000084abb3767dc0
	Oct 13 21:54:34 pause-609677 crio[2077]: time="2025-10-13T21:54:34.791265365Z" level=info msg="Started container" PID=2381 containerID=e66556a02af29b5639690e5cd3315bd331955efbe9dba130bc12a73fb77b4cb6 description=kube-system/coredns-66bc5c9577-9hxkk/coredns id=c348e06e-58d3-4b43-9fc9-585305443851 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4494ec7ec867a68424517e93d2d6e1bcbdc3a770231f6f45ab80a0fa74ced8e3
	Oct 13 21:54:34 pause-609677 crio[2077]: time="2025-10-13T21:54:34.796181779Z" level=info msg="Creating container: kube-system/kube-scheduler-pause-609677/kube-scheduler" id=f5af2cbd-254a-47c6-a74b-4c545e80a5a8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 21:54:34 pause-609677 crio[2077]: time="2025-10-13T21:54:34.796508343Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 21:54:34 pause-609677 crio[2077]: time="2025-10-13T21:54:34.802874892Z" level=info msg="Created container 23af4780ca7ffb5793e522bd1ba38bec36ec20fb0c2870aa8f742802612b1675: kube-system/kube-controller-manager-pause-609677/kube-controller-manager" id=c6ab3cf1-1e9b-4388-87b2-653cb93d1fce name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 21:54:34 pause-609677 crio[2077]: time="2025-10-13T21:54:34.814276728Z" level=info msg="Starting container: 23af4780ca7ffb5793e522bd1ba38bec36ec20fb0c2870aa8f742802612b1675" id=4efeb6f7-e0f4-4710-962e-b3cddb362061 name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 21:54:34 pause-609677 crio[2077]: time="2025-10-13T21:54:34.824476479Z" level=info msg="Started container" PID=2389 containerID=23af4780ca7ffb5793e522bd1ba38bec36ec20fb0c2870aa8f742802612b1675 description=kube-system/kube-controller-manager-pause-609677/kube-controller-manager id=4efeb6f7-e0f4-4710-962e-b3cddb362061 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4f50d7fbb522400ec852376789b5360b2b5139c3262d1a0e39a868280b0f64d6
	Oct 13 21:54:34 pause-609677 crio[2077]: time="2025-10-13T21:54:34.825003374Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 21:54:34 pause-609677 crio[2077]: time="2025-10-13T21:54:34.825638458Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 21:54:34 pause-609677 crio[2077]: time="2025-10-13T21:54:34.860694331Z" level=info msg="Created container e813b1e3d651851055f0daeb10197c02a797ba29bdd3f3c236cb1f479151d3c7: kube-system/kube-scheduler-pause-609677/kube-scheduler" id=f5af2cbd-254a-47c6-a74b-4c545e80a5a8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 21:54:34 pause-609677 crio[2077]: time="2025-10-13T21:54:34.861302208Z" level=info msg="Starting container: e813b1e3d651851055f0daeb10197c02a797ba29bdd3f3c236cb1f479151d3c7" id=fa03b318-8c65-4c92-b2f5-8d8803acd78e name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 21:54:34 pause-609677 crio[2077]: time="2025-10-13T21:54:34.862992231Z" level=info msg="Started container" PID=2411 containerID=e813b1e3d651851055f0daeb10197c02a797ba29bdd3f3c236cb1f479151d3c7 description=kube-system/kube-scheduler-pause-609677/kube-scheduler id=fa03b318-8c65-4c92-b2f5-8d8803acd78e name=/runtime.v1.RuntimeService/StartContainer sandboxID=c8ef35d0dc281196139fb6003d049dad58c8ff1f552b5f3134422ee5ccfce964
	Oct 13 21:54:45 pause-609677 crio[2077]: time="2025-10-13T21:54:45.023283895Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 13 21:54:45 pause-609677 crio[2077]: time="2025-10-13T21:54:45.031736477Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 13 21:54:45 pause-609677 crio[2077]: time="2025-10-13T21:54:45.031805169Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 13 21:54:45 pause-609677 crio[2077]: time="2025-10-13T21:54:45.031838727Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 13 21:54:45 pause-609677 crio[2077]: time="2025-10-13T21:54:45.039761099Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 13 21:54:45 pause-609677 crio[2077]: time="2025-10-13T21:54:45.040031377Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 13 21:54:45 pause-609677 crio[2077]: time="2025-10-13T21:54:45.040158003Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 13 21:54:45 pause-609677 crio[2077]: time="2025-10-13T21:54:45.047074315Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 13 21:54:45 pause-609677 crio[2077]: time="2025-10-13T21:54:45.047110794Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 13 21:54:45 pause-609677 crio[2077]: time="2025-10-13T21:54:45.047138502Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 13 21:54:45 pause-609677 crio[2077]: time="2025-10-13T21:54:45.052054062Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 13 21:54:45 pause-609677 crio[2077]: time="2025-10-13T21:54:45.052093216Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	e813b1e3d6518       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   18 seconds ago       Running             kube-scheduler            1                   c8ef35d0dc281       kube-scheduler-pause-609677            kube-system
	23af4780ca7ff       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   18 seconds ago       Running             kube-controller-manager   1                   4f50d7fbb5224       kube-controller-manager-pause-609677   kube-system
	e66556a02af29       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   18 seconds ago       Running             coredns                   1                   4494ec7ec867a       coredns-66bc5c9577-9hxkk               kube-system
	37fc253c25c3b       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   18 seconds ago       Running             kube-proxy                1                   80f4d2924ee99       kube-proxy-6zl75                       kube-system
	1dd85a64ce20b       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   18 seconds ago       Running             kindnet-cni               1                   af594200932e0       kindnet-gbt7d                          kube-system
	fc8e2ea74687b       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   18 seconds ago       Running             etcd                      1                   42c23535fb7b6       etcd-pause-609677                      kube-system
	b779d4fe4cba3       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   18 seconds ago       Running             kube-apiserver            1                   724e464cfb9f6       kube-apiserver-pause-609677            kube-system
	20c5aff0e9704       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   32 seconds ago       Exited              coredns                   0                   4494ec7ec867a       coredns-66bc5c9577-9hxkk               kube-system
	e633956403c8d       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   af594200932e0       kindnet-gbt7d                          kube-system
	f07f3c1fea64d       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Exited              kube-proxy                0                   80f4d2924ee99       kube-proxy-6zl75                       kube-system
	cb21928f370fe       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   0                   4f50d7fbb5224       kube-controller-manager-pause-609677   kube-system
	da8fa92310de2       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            0                   c8ef35d0dc281       kube-scheduler-pause-609677            kube-system
	082f903b4adc7       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            0                   724e464cfb9f6       kube-apiserver-pause-609677            kube-system
	c60258466c3d1       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      0                   42c23535fb7b6       etcd-pause-609677                      kube-system
	
	
	==> coredns [20c5aff0e9704a0e9cf80f1bc3097b3adcc89c040fb42664c031162cc8af3eee] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60356 - 27278 "HINFO IN 2506169572643410066.3812097225966522119. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013754059s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [e66556a02af29b5639690e5cd3315bd331955efbe9dba130bc12a73fb77b4cb6] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60875 - 45285 "HINFO IN 196991321812913998.3637127665570046469. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.013266967s
	
	
	==> describe nodes <==
	Name:               pause-609677
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-609677
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6d66ff63385795e7745a92b3d96cb54f5b977801
	                    minikube.k8s.io/name=pause-609677
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_13T21_53_33_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 Oct 2025 21:53:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-609677
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 Oct 2025 21:54:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 Oct 2025 21:54:24 +0000   Mon, 13 Oct 2025 21:53:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 Oct 2025 21:54:24 +0000   Mon, 13 Oct 2025 21:53:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 Oct 2025 21:54:24 +0000   Mon, 13 Oct 2025 21:53:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 13 Oct 2025 21:54:24 +0000   Mon, 13 Oct 2025 21:54:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-609677
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 ebd21760060d4a39853f563197507e5d
	  System UUID:                0def1069-5034-4287-905d-8502ad76088b
	  Boot ID:                    d306204e-e74c-4697-9887-c9f3c96cc083
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-9hxkk                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     74s
	  kube-system                 etcd-pause-609677                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         80s
	  kube-system                 kindnet-gbt7d                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      75s
	  kube-system                 kube-apiserver-pause-609677             250m (12%)    0 (0%)      0 (0%)           0 (0%)         80s
	  kube-system                 kube-controller-manager-pause-609677    200m (10%)    0 (0%)      0 (0%)           0 (0%)         80s
	  kube-system                 kube-proxy-6zl75                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 kube-scheduler-pause-609677             100m (5%)     0 (0%)      0 (0%)           0 (0%)         82s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 73s                kube-proxy       
	  Normal   Starting                 15s                kube-proxy       
	  Normal   NodeHasSufficientMemory  89s (x8 over 89s)  kubelet          Node pause-609677 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    89s (x8 over 89s)  kubelet          Node pause-609677 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     89s (x8 over 89s)  kubelet          Node pause-609677 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 80s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  80s                kubelet          Node pause-609677 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    80s                kubelet          Node pause-609677 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     80s                kubelet          Node pause-609677 status is now: NodeHasSufficientPID
	  Normal   Starting                 80s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           76s                node-controller  Node pause-609677 event: Registered Node pause-609677 in Controller
	  Normal   NodeReady                33s                kubelet          Node pause-609677 status is now: NodeReady
	  Warning  ContainerGCFailed        20s                kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           12s                node-controller  Node pause-609677 event: Registered Node pause-609677 in Controller
	
	
	==> dmesg <==
	[Oct13 21:28] overlayfs: idmapped layers are currently not supported
	[  +4.197577] overlayfs: idmapped layers are currently not supported
	[Oct13 21:29] overlayfs: idmapped layers are currently not supported
	[ +40.174368] overlayfs: idmapped layers are currently not supported
	[Oct13 21:30] hrtimer: interrupt took 51471165 ns
	[Oct13 21:31] overlayfs: idmapped layers are currently not supported
	[Oct13 21:36] overlayfs: idmapped layers are currently not supported
	[ +36.803698] overlayfs: idmapped layers are currently not supported
	[Oct13 21:38] overlayfs: idmapped layers are currently not supported
	[Oct13 21:39] overlayfs: idmapped layers are currently not supported
	[Oct13 21:40] overlayfs: idmapped layers are currently not supported
	[Oct13 21:41] overlayfs: idmapped layers are currently not supported
	[Oct13 21:42] overlayfs: idmapped layers are currently not supported
	[  +7.684868] overlayfs: idmapped layers are currently not supported
	[Oct13 21:43] overlayfs: idmapped layers are currently not supported
	[ +17.500139] overlayfs: idmapped layers are currently not supported
	[Oct13 21:44] overlayfs: idmapped layers are currently not supported
	[ +25.978359] overlayfs: idmapped layers are currently not supported
	[Oct13 21:46] overlayfs: idmapped layers are currently not supported
	[Oct13 21:47] overlayfs: idmapped layers are currently not supported
	[Oct13 21:49] overlayfs: idmapped layers are currently not supported
	[Oct13 21:50] overlayfs: idmapped layers are currently not supported
	[Oct13 21:51] overlayfs: idmapped layers are currently not supported
	[Oct13 21:53] overlayfs: idmapped layers are currently not supported
	[Oct13 21:54] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [c60258466c3d1703891e0584fffda2246e17fe642e25b99afaf0cb9e26934b79] <==
	{"level":"warn","ts":"2025-10-13T21:53:27.976879Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:53:28.027421Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:53:28.126198Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:53:28.140109Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:53:28.224374Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:53:28.248291Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:53:28.459769Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39766","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-13T21:54:26.028785Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-13T21:54:26.028849Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-609677","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-10-13T21:54:26.028952Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-13T21:54:26.173749Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-13T21:54:26.173847Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-13T21:54:26.173891Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"warn","ts":"2025-10-13T21:54:26.173916Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-13T21:54:26.173947Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2025-10-13T21:54:26.173946Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"error","ts":"2025-10-13T21:54:26.173955Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-13T21:54:26.173958Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-10-13T21:54:26.174008Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-13T21:54:26.174019Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-13T21:54:26.174027Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-13T21:54:26.177296Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-10-13T21:54:26.177385Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-13T21:54:26.177424Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-13T21:54:26.177431Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-609677","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> etcd [fc8e2ea74687b306f0b75c55f4977da5505971a5a74789317b3fab98a5e92f03] <==
	{"level":"warn","ts":"2025-10-13T21:54:36.797905Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:54:36.819857Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:54:36.830353Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:54:36.845972Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:54:36.866996Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:54:36.876239Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:54:36.899635Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:54:36.913981Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:54:36.931336Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:54:36.961101Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:54:36.976026Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:54:36.990247Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:54:37.005518Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:54:37.024810Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:54:37.042156Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:54:37.060607Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:54:37.074760Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:54:37.094549Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:54:37.102614Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:54:37.116466Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:54:37.138186Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:54:37.167089Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:54:37.179575Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:54:37.194211Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:54:37.260754Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45564","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 21:54:53 up  1:37,  0 user,  load average: 2.27, 2.99, 2.43
	Linux pause-609677 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1dd85a64ce20b9dcdee25927856a7136cc2ca59b9128254415532367203a522a] <==
	I1013 21:54:34.873565       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1013 21:54:34.876225       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1013 21:54:34.876380       1 main.go:148] setting mtu 1500 for CNI 
	I1013 21:54:34.876402       1 main.go:178] kindnetd IP family: "ipv4"
	I1013 21:54:34.876414       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-13T21:54:35Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1013 21:54:35.016357       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1013 21:54:35.031968       1 controller.go:381] "Waiting for informer caches to sync"
	I1013 21:54:35.032064       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1013 21:54:35.033057       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1013 21:54:38.134080       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1013 21:54:38.134124       1 metrics.go:72] Registering metrics
	I1013 21:54:38.134176       1 controller.go:711] "Syncing nftables rules"
	I1013 21:54:45.019857       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1013 21:54:45.019923       1 main.go:301] handling current node
	
	
	==> kindnet [e633956403c8d2bcdad3ed466b8fd307d41d45bfbb6d078f6e5f72f0c194d2ee] <==
	I1013 21:53:40.026268       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1013 21:53:40.039955       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1013 21:53:40.040195       1 main.go:148] setting mtu 1500 for CNI 
	I1013 21:53:40.040243       1 main.go:178] kindnetd IP family: "ipv4"
	I1013 21:53:40.040286       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-13T21:53:40Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1013 21:53:40.212395       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1013 21:53:40.212468       1 controller.go:381] "Waiting for informer caches to sync"
	I1013 21:53:40.212500       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1013 21:53:40.212638       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1013 21:54:10.213064       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1013 21:54:10.213258       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1013 21:54:10.213379       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1013 21:54:10.214647       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1013 21:54:11.813079       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1013 21:54:11.813111       1 metrics.go:72] Registering metrics
	I1013 21:54:11.813189       1 controller.go:711] "Syncing nftables rules"
	I1013 21:54:20.212400       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1013 21:54:20.212456       1 main.go:301] handling current node
	
	
	==> kube-apiserver [082f903b4adc7c39f642e460f3161aa4ba0f568fdd713a3bde9e5748752b5eb7] <==
	W1013 21:54:26.052984       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 21:54:26.053046       1 logging.go:55] [core] [Channel #191 SubChannel #193]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 21:54:26.053098       1 logging.go:55] [core] [Channel #247 SubChannel #249]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 21:54:26.053167       1 logging.go:55] [core] [Channel #9 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 21:54:26.053225       1 logging.go:55] [core] [Channel #63 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 21:54:26.053277       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 21:54:26.053328       1 logging.go:55] [core] [Channel #211 SubChannel #213]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 21:54:26.053380       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 21:54:26.053428       1 logging.go:55] [core] [Channel #151 SubChannel #153]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 21:54:26.053478       1 logging.go:55] [core] [Channel #167 SubChannel #169]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 21:54:26.053529       1 logging.go:55] [core] [Channel #159 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 21:54:26.053577       1 logging.go:55] [core] [Channel #107 SubChannel #109]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 21:54:26.053629       1 logging.go:55] [core] [Channel #171 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 21:54:26.053708       1 logging.go:55] [core] [Channel #243 SubChannel #245]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 21:54:26.053759       1 logging.go:55] [core] [Channel #115 SubChannel #117]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 21:54:26.053810       1 logging.go:55] [core] [Channel #123 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 21:54:26.054018       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 21:54:26.054064       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 21:54:26.054108       1 logging.go:55] [core] [Channel #119 SubChannel #121]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 21:54:26.054176       1 logging.go:55] [core] [Channel #143 SubChannel #145]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 21:54:26.054951       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 21:54:26.055004       1 logging.go:55] [core] [Channel #223 SubChannel #225]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 21:54:26.055060       1 logging.go:55] [core] [Channel #163 SubChannel #165]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 21:54:26.055450       1 logging.go:55] [core] [Channel #235 SubChannel #237]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [b779d4fe4cba319c08ada9835653c4429eb4ab3cfcac3fd8b6f5055e1b826f3d] <==
	I1013 21:54:38.029565       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1013 21:54:38.053836       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1013 21:54:38.056458       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1013 21:54:38.058951       1 policy_source.go:240] refreshing policies
	I1013 21:54:38.059216       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1013 21:54:38.080891       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1013 21:54:38.080928       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1013 21:54:38.081086       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1013 21:54:38.081960       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1013 21:54:38.082059       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1013 21:54:38.082098       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1013 21:54:38.095616       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1013 21:54:38.095799       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1013 21:54:38.096140       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1013 21:54:38.104972       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1013 21:54:38.107464       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1013 21:54:38.126468       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1013 21:54:38.137445       1 cache.go:39] Caches are synced for autoregister controller
	E1013 21:54:38.203929       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1013 21:54:38.773545       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1013 21:54:39.987730       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1013 21:54:41.556444       1 controller.go:667] quota admission added evaluator for: endpoints
	I1013 21:54:41.607497       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1013 21:54:41.655372       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1013 21:54:41.757133       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [23af4780ca7ffb5793e522bd1ba38bec36ec20fb0c2870aa8f742802612b1675] <==
	I1013 21:54:41.366385       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1013 21:54:41.367603       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1013 21:54:41.368784       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1013 21:54:41.373074       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1013 21:54:41.375290       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1013 21:54:41.377540       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1013 21:54:41.383849       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 21:54:41.383869       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1013 21:54:41.383878       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1013 21:54:41.388167       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1013 21:54:41.388170       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 21:54:41.390544       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1013 21:54:41.390627       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1013 21:54:41.390694       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-609677"
	I1013 21:54:41.390736       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1013 21:54:41.393501       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1013 21:54:41.394315       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1013 21:54:41.398899       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1013 21:54:41.399075       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1013 21:54:41.399733       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1013 21:54:41.399763       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1013 21:54:41.399798       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1013 21:54:41.406941       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1013 21:54:41.408204       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1013 21:54:41.408232       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-controller-manager [cb21928f370feb3f97bfbce9a8d34340e56cbc857eadc6accb9f1d851d0886c3] <==
	I1013 21:53:37.911084       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1013 21:53:37.911096       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1013 21:53:37.912307       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1013 21:53:37.916093       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1013 21:53:37.919282       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1013 21:53:37.921611       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1013 21:53:37.921742       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1013 21:53:37.921799       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1013 21:53:37.923396       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1013 21:53:37.923557       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1013 21:53:37.923804       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1013 21:53:37.924740       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1013 21:53:37.929516       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1013 21:53:37.929636       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1013 21:53:37.929704       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1013 21:53:37.929741       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1013 21:53:37.929771       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1013 21:53:37.934353       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1013 21:53:37.935199       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1013 21:53:37.940144       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-609677" podCIDRs=["10.244.0.0/24"]
	I1013 21:53:37.940275       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1013 21:53:37.971935       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 21:53:37.972019       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1013 21:53:37.972053       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1013 21:54:22.886105       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [37fc253c25c3b33e95033f66d59c43bec2cc24937ab8865d0b893a1ca44bd78b] <==
	I1013 21:54:35.306324       1 server_linux.go:53] "Using iptables proxy"
	I1013 21:54:35.798718       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 21:54:38.231892       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 21:54:38.231940       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1013 21:54:38.232018       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 21:54:38.292157       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1013 21:54:38.292267       1 server_linux.go:132] "Using iptables Proxier"
	I1013 21:54:38.307199       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 21:54:38.307567       1 server.go:527] "Version info" version="v1.34.1"
	I1013 21:54:38.307625       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 21:54:38.313431       1 config.go:200] "Starting service config controller"
	I1013 21:54:38.313461       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 21:54:38.319929       1 config.go:106] "Starting endpoint slice config controller"
	I1013 21:54:38.319950       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 21:54:38.319968       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 21:54:38.319973       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 21:54:38.320646       1 config.go:309] "Starting node config controller"
	I1013 21:54:38.320698       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 21:54:38.320727       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 21:54:38.414412       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1013 21:54:38.420716       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1013 21:54:38.420818       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [f07f3c1fea64da214073a546691f51cbeed5401814bdfecf8c5c2d7d965b76dc] <==
	I1013 21:53:40.021413       1 server_linux.go:53] "Using iptables proxy"
	I1013 21:53:40.125827       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 21:53:40.229542       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 21:53:40.229603       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1013 21:53:40.229689       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 21:53:40.247580       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1013 21:53:40.247702       1 server_linux.go:132] "Using iptables Proxier"
	I1013 21:53:40.251026       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 21:53:40.251409       1 server.go:527] "Version info" version="v1.34.1"
	I1013 21:53:40.251580       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 21:53:40.252837       1 config.go:200] "Starting service config controller"
	I1013 21:53:40.252891       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 21:53:40.252937       1 config.go:106] "Starting endpoint slice config controller"
	I1013 21:53:40.252963       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 21:53:40.252999       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 21:53:40.253025       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 21:53:40.253679       1 config.go:309] "Starting node config controller"
	I1013 21:53:40.253728       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 21:53:40.253755       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 21:53:40.354378       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1013 21:53:40.354458       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1013 21:53:40.354684       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [da8fa92310de218e142725844a91bcbbb0d5395e7f060fbf8775c56dcadde035] <==
	E1013 21:53:30.197237       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1013 21:53:30.197289       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1013 21:53:30.197341       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1013 21:53:30.197385       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1013 21:53:30.197423       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1013 21:53:30.197622       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1013 21:53:30.197674       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1013 21:53:30.197716       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1013 21:53:30.197809       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1013 21:53:30.197845       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1013 21:53:30.197880       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1013 21:53:31.012970       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1013 21:53:31.025121       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1013 21:53:31.158649       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1013 21:53:31.172387       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1013 21:53:31.192080       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1013 21:53:31.237394       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1013 21:53:31.360123       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1013 21:53:33.061884       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 21:54:26.030259       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1013 21:54:26.030283       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1013 21:54:26.030304       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1013 21:54:26.030332       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 21:54:26.030651       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1013 21:54:26.030671       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [e813b1e3d651851055f0daeb10197c02a797ba29bdd3f3c236cb1f479151d3c7] <==
	I1013 21:54:35.625325       1 serving.go:386] Generated self-signed cert in-memory
	W1013 21:54:37.964591       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1013 21:54:37.964688       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1013 21:54:37.964739       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1013 21:54:37.964770       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1013 21:54:38.152360       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1013 21:54:38.152444       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 21:54:38.154597       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 21:54:38.161050       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 21:54:38.161988       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1013 21:54:38.162059       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1013 21:54:38.261231       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 13 21:54:34 pause-609677 kubelet[1311]: E1013 21:54:34.717763    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-609677\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="15c7bbf70c5a0488c39e702688025350" pod="kube-system/kube-controller-manager-pause-609677"
	Oct 13 21:54:34 pause-609677 kubelet[1311]: E1013 21:54:34.718164    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6zl75\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="ae7ccd7e-e862-4dad-9fdf-c049be8b6d2e" pod="kube-system/kube-proxy-6zl75"
	Oct 13 21:54:34 pause-609677 kubelet[1311]: E1013 21:54:34.719226    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-gbt7d\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="41ae05d3-8177-4a09-8617-d9c26c154582" pod="kube-system/kindnet-gbt7d"
	Oct 13 21:54:34 pause-609677 kubelet[1311]: E1013 21:54:34.719586    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-9hxkk\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="22b9d94d-f872-48ad-a5fa-77a5bd5186d1" pod="kube-system/coredns-66bc5c9577-9hxkk"
	Oct 13 21:54:34 pause-609677 kubelet[1311]: E1013 21:54:34.719965    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-609677\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="001e476ef89ab0bf83975cc494875b98" pod="kube-system/kube-scheduler-pause-609677"
	Oct 13 21:54:34 pause-609677 kubelet[1311]: E1013 21:54:34.723279    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-609677\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="bbc8caca6b8b0f86abc0592289625533" pod="kube-system/etcd-pause-609677"
	Oct 13 21:54:34 pause-609677 kubelet[1311]: E1013 21:54:34.723679    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-609677\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="f070d7ef2b4970a3dd205675fce8604a" pod="kube-system/kube-apiserver-pause-609677"
	Oct 13 21:54:34 pause-609677 kubelet[1311]: I1013 21:54:34.739007    1311 scope.go:117] "RemoveContainer" containerID="da8fa92310de218e142725844a91bcbbb0d5395e7f060fbf8775c56dcadde035"
	Oct 13 21:54:34 pause-609677 kubelet[1311]: E1013 21:54:34.739520    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-609677\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="bbc8caca6b8b0f86abc0592289625533" pod="kube-system/etcd-pause-609677"
	Oct 13 21:54:34 pause-609677 kubelet[1311]: E1013 21:54:34.739716    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-609677\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="f070d7ef2b4970a3dd205675fce8604a" pod="kube-system/kube-apiserver-pause-609677"
	Oct 13 21:54:34 pause-609677 kubelet[1311]: E1013 21:54:34.739985    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-609677\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="15c7bbf70c5a0488c39e702688025350" pod="kube-system/kube-controller-manager-pause-609677"
	Oct 13 21:54:34 pause-609677 kubelet[1311]: E1013 21:54:34.740177    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6zl75\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="ae7ccd7e-e862-4dad-9fdf-c049be8b6d2e" pod="kube-system/kube-proxy-6zl75"
	Oct 13 21:54:34 pause-609677 kubelet[1311]: E1013 21:54:34.740407    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-gbt7d\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="41ae05d3-8177-4a09-8617-d9c26c154582" pod="kube-system/kindnet-gbt7d"
	Oct 13 21:54:34 pause-609677 kubelet[1311]: E1013 21:54:34.740611    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-9hxkk\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="22b9d94d-f872-48ad-a5fa-77a5bd5186d1" pod="kube-system/coredns-66bc5c9577-9hxkk"
	Oct 13 21:54:34 pause-609677 kubelet[1311]: E1013 21:54:34.740823    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-609677\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="001e476ef89ab0bf83975cc494875b98" pod="kube-system/kube-scheduler-pause-609677"
	Oct 13 21:54:37 pause-609677 kubelet[1311]: E1013 21:54:37.816257    1311 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-609677\" is forbidden: User \"system:node:pause-609677\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-609677' and this object" podUID="001e476ef89ab0bf83975cc494875b98" pod="kube-system/kube-scheduler-pause-609677"
	Oct 13 21:54:37 pause-609677 kubelet[1311]: E1013 21:54:37.832066    1311 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-609677\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-609677' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Oct 13 21:54:37 pause-609677 kubelet[1311]: E1013 21:54:37.832262    1311 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-609677\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-609677' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Oct 13 21:54:37 pause-609677 kubelet[1311]: E1013 21:54:37.919197    1311 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-609677\" is forbidden: User \"system:node:pause-609677\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-609677' and this object" podUID="bbc8caca6b8b0f86abc0592289625533" pod="kube-system/etcd-pause-609677"
	Oct 13 21:54:37 pause-609677 kubelet[1311]: E1013 21:54:37.964107    1311 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-609677\" is forbidden: User \"system:node:pause-609677\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-609677' and this object" podUID="f070d7ef2b4970a3dd205675fce8604a" pod="kube-system/kube-apiserver-pause-609677"
	Oct 13 21:54:38 pause-609677 kubelet[1311]: E1013 21:54:38.035759    1311 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-609677\" is forbidden: User \"system:node:pause-609677\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-609677' and this object" podUID="15c7bbf70c5a0488c39e702688025350" pod="kube-system/kube-controller-manager-pause-609677"
	Oct 13 21:54:43 pause-609677 kubelet[1311]: W1013 21:54:43.699443    1311 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Oct 13 21:54:51 pause-609677 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 13 21:54:51 pause-609677 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 13 21:54:51 pause-609677 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
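A note on the kube-scheduler warnings at the top of the log above: the requestheader_controller message itself names the usual remedy, a rolebinding in kube-system against the built-in extension-apiserver-authentication-reader role. A minimal sketch of that fix, with the binding name chosen here as a placeholder and the subject taken from the "system:kube-scheduler" user named in the error (illustrative only, not part of this test run):

    # allow the scheduler's user to read the authentication configmap in kube-system
    # (binding name is a placeholder; role and user come from the log message)
    kubectl create rolebinding scheduler-authentication-reader \
      -n kube-system \
      --role=extension-apiserver-authentication-reader \
      --user=system:kube-scheduler

In this run the warning is transient: as the following log lines show, the scheduler continues without the authentication configuration and finishes syncing its caches.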
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-609677 -n pause-609677
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-609677 -n pause-609677: exit status 2 (375.991465ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-609677 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
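The field selector in that kubectl call is what narrows the listing to pods whose phase is not Running; the jsonpath expression then prints only their names. The same query, restated as a standalone command against the context this test uses:

    # names of pods in any namespace whose phase is not Running
    kubectl --context pause-609677 get pods -A \
      --field-selector=status.phase!=Running \
      -o jsonpath='{.items[*].metadata.name}'

An empty result at this point means every pod was reporting phase Running when the post-mortem ran.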
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-609677
helpers_test.go:243: (dbg) docker inspect pause-609677:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b1a494f13009c9d28b4063f7ca3fdf836a1996323ea7cca934c720d7f2aeccd6",
	        "Created": "2025-10-13T21:53:06.840898932Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 159570,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-13T21:53:06.909810941Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/b1a494f13009c9d28b4063f7ca3fdf836a1996323ea7cca934c720d7f2aeccd6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b1a494f13009c9d28b4063f7ca3fdf836a1996323ea7cca934c720d7f2aeccd6/hostname",
	        "HostsPath": "/var/lib/docker/containers/b1a494f13009c9d28b4063f7ca3fdf836a1996323ea7cca934c720d7f2aeccd6/hosts",
	        "LogPath": "/var/lib/docker/containers/b1a494f13009c9d28b4063f7ca3fdf836a1996323ea7cca934c720d7f2aeccd6/b1a494f13009c9d28b4063f7ca3fdf836a1996323ea7cca934c720d7f2aeccd6-json.log",
	        "Name": "/pause-609677",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-609677:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-609677",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b1a494f13009c9d28b4063f7ca3fdf836a1996323ea7cca934c720d7f2aeccd6",
	                "LowerDir": "/var/lib/docker/overlay2/423b897cc0f3c9290302a9eae77595f358ac8e8bc5daf507f28ef35bad6a168a-init/diff:/var/lib/docker/overlay2/160e637be34680c755ffc6109c686a7fe5c4dfcba06c9274b3806724dc064518/diff",
	                "MergedDir": "/var/lib/docker/overlay2/423b897cc0f3c9290302a9eae77595f358ac8e8bc5daf507f28ef35bad6a168a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/423b897cc0f3c9290302a9eae77595f358ac8e8bc5daf507f28ef35bad6a168a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/423b897cc0f3c9290302a9eae77595f358ac8e8bc5daf507f28ef35bad6a168a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-609677",
	                "Source": "/var/lib/docker/volumes/pause-609677/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-609677",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-609677",
	                "name.minikube.sigs.k8s.io": "pause-609677",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6eeda4ca35523fbf936044f54f396a57e4f04bb2b1e984f2fba454cbbaed4c2f",
	            "SandboxKey": "/var/run/docker/netns/6eeda4ca3552",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33026"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33027"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33030"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33028"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33029"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-609677": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f2:b4:7e:be:7a:b5",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "74a66ab38d5c3574cbcbdbc75813b7741e9ba5eaed628c4754810bd95dd597b9",
	                    "EndpointID": "156119fd85cd461e917b6f70f91f3dff8b99f392ae98e8a0da12ef0e5cc97a01",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-609677",
	                        "b1a494f13009"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
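The helper captures the full docker inspect dump above; when only one field is of interest, the same data can be pulled with a Go template via -f. A short illustrative sketch against the same container (not part of the test flow):

    # confirm the container is running and not paused at the Docker level
    docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' pause-609677
    # host port mapped to the apiserver port 8443 inside the container
    docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' pause-609677

The second template is the same indexing pattern minikube itself uses for port 22/tcp in the provisioning log further down.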
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-609677 -n pause-609677
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-609677 -n pause-609677: exit status 2 (347.369511ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
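The Host check here uses the same --format mechanism as the earlier APIServer check: the format string is a Go template rendered over the per-node status fields minikube prints by default (Host, Kubelet, APIServer, Kubeconfig). A hedged sketch combining those fields in one call:

    # render several status fields at once for the pause-609677 profile
    out/minikube-linux-arm64 status -p pause-609677 \
      --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}} kubeconfig:{{.Kubeconfig}}'

As the helper's own "(may be ok)" note indicates, a non-zero exit from status is tolerated at this point, since the pause flow intentionally leaves some components in a non-running state.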
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-609677 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-609677 logs -n 25: (1.319124043s)
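The post-mortem then collects the cluster-side view with minikube logs; -n 25 caps how many lines back each log source is read (the short form of the --length flag, as the minikube CLI documents it). Re-running the same collection by hand would look like:

    # last 25 lines from each log source of the pause-609677 profile
    out/minikube-linux-arm64 -p pause-609677 logs -n 25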
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-585265 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-585265       │ jenkins │ v1.37.0 │ 13 Oct 25 21:49 UTC │ 13 Oct 25 21:49 UTC │
	│ ssh     │ -p NoKubernetes-585265 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-585265       │ jenkins │ v1.37.0 │ 13 Oct 25 21:49 UTC │                     │
	│ stop    │ -p NoKubernetes-585265                                                                                                                   │ NoKubernetes-585265       │ jenkins │ v1.37.0 │ 13 Oct 25 21:49 UTC │ 13 Oct 25 21:49 UTC │
	│ start   │ -p NoKubernetes-585265 --driver=docker  --container-runtime=crio                                                                         │ NoKubernetes-585265       │ jenkins │ v1.37.0 │ 13 Oct 25 21:49 UTC │ 13 Oct 25 21:50 UTC │
	│ start   │ -p missing-upgrade-403510 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-403510    │ jenkins │ v1.37.0 │ 13 Oct 25 21:50 UTC │ 13 Oct 25 21:50 UTC │
	│ ssh     │ -p NoKubernetes-585265 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-585265       │ jenkins │ v1.37.0 │ 13 Oct 25 21:50 UTC │                     │
	│ delete  │ -p NoKubernetes-585265                                                                                                                   │ NoKubernetes-585265       │ jenkins │ v1.37.0 │ 13 Oct 25 21:50 UTC │ 13 Oct 25 21:50 UTC │
	│ start   │ -p kubernetes-upgrade-304765 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-304765 │ jenkins │ v1.37.0 │ 13 Oct 25 21:50 UTC │ 13 Oct 25 21:50 UTC │
	│ stop    │ -p kubernetes-upgrade-304765                                                                                                             │ kubernetes-upgrade-304765 │ jenkins │ v1.37.0 │ 13 Oct 25 21:50 UTC │ 13 Oct 25 21:50 UTC │
	│ start   │ -p kubernetes-upgrade-304765 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-304765 │ jenkins │ v1.37.0 │ 13 Oct 25 21:50 UTC │ 13 Oct 25 21:53 UTC │
	│ delete  │ -p missing-upgrade-403510                                                                                                                │ missing-upgrade-403510    │ jenkins │ v1.37.0 │ 13 Oct 25 21:50 UTC │ 13 Oct 25 21:50 UTC │
	│ start   │ -p stopped-upgrade-014468 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-014468    │ jenkins │ v1.32.0 │ 13 Oct 25 21:50 UTC │ 13 Oct 25 21:51 UTC │
	│ stop    │ stopped-upgrade-014468 stop                                                                                                              │ stopped-upgrade-014468    │ jenkins │ v1.32.0 │ 13 Oct 25 21:51 UTC │ 13 Oct 25 21:51 UTC │
	│ start   │ -p stopped-upgrade-014468 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-014468    │ jenkins │ v1.37.0 │ 13 Oct 25 21:51 UTC │ 13 Oct 25 21:51 UTC │
	│ delete  │ -p stopped-upgrade-014468                                                                                                                │ stopped-upgrade-014468    │ jenkins │ v1.37.0 │ 13 Oct 25 21:51 UTC │ 13 Oct 25 21:52 UTC │
	│ start   │ -p running-upgrade-601721 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ running-upgrade-601721    │ jenkins │ v1.32.0 │ 13 Oct 25 21:52 UTC │ 13 Oct 25 21:52 UTC │
	│ start   │ -p running-upgrade-601721 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ running-upgrade-601721    │ jenkins │ v1.37.0 │ 13 Oct 25 21:52 UTC │ 13 Oct 25 21:52 UTC │
	│ delete  │ -p running-upgrade-601721                                                                                                                │ running-upgrade-601721    │ jenkins │ v1.37.0 │ 13 Oct 25 21:52 UTC │ 13 Oct 25 21:53 UTC │
	│ start   │ -p pause-609677 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-609677              │ jenkins │ v1.37.0 │ 13 Oct 25 21:53 UTC │ 13 Oct 25 21:54 UTC │
	│ start   │ -p kubernetes-upgrade-304765 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                        │ kubernetes-upgrade-304765 │ jenkins │ v1.37.0 │ 13 Oct 25 21:53 UTC │                     │
	│ start   │ -p kubernetes-upgrade-304765 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-304765 │ jenkins │ v1.37.0 │ 13 Oct 25 21:53 UTC │ 13 Oct 25 21:53 UTC │
	│ delete  │ -p kubernetes-upgrade-304765                                                                                                             │ kubernetes-upgrade-304765 │ jenkins │ v1.37.0 │ 13 Oct 25 21:53 UTC │ 13 Oct 25 21:53 UTC │
	│ start   │ -p force-systemd-flag-257205 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio              │ force-systemd-flag-257205 │ jenkins │ v1.37.0 │ 13 Oct 25 21:53 UTC │                     │
	│ start   │ -p pause-609677 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-609677              │ jenkins │ v1.37.0 │ 13 Oct 25 21:54 UTC │ 13 Oct 25 21:54 UTC │
	│ pause   │ -p pause-609677 --alsologtostderr -v=5                                                                                                   │ pause-609677              │ jenkins │ v1.37.0 │ 13 Oct 25 21:54 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 21:54:23
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 21:54:23.746379  165023 out.go:360] Setting OutFile to fd 1 ...
	I1013 21:54:23.746559  165023 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:54:23.746570  165023 out.go:374] Setting ErrFile to fd 2...
	I1013 21:54:23.746575  165023 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:54:23.746869  165023 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-2495/.minikube/bin
	I1013 21:54:23.747257  165023 out.go:368] Setting JSON to false
	I1013 21:54:23.748282  165023 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":5798,"bootTime":1760386666,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1013 21:54:23.748350  165023 start.go:141] virtualization:  
	I1013 21:54:23.753335  165023 out.go:179] * [pause-609677] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1013 21:54:23.756276  165023 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 21:54:23.756382  165023 notify.go:220] Checking for updates...
	I1013 21:54:23.762168  165023 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 21:54:23.765099  165023 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-2495/kubeconfig
	I1013 21:54:23.768143  165023 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-2495/.minikube
	I1013 21:54:23.770962  165023 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1013 21:54:23.773877  165023 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 21:54:23.777270  165023 config.go:182] Loaded profile config "pause-609677": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:54:23.777810  165023 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 21:54:23.809288  165023 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1013 21:54:23.809401  165023 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 21:54:23.879517  165023 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-13 21:54:23.866083137 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 21:54:23.879654  165023 docker.go:318] overlay module found
	I1013 21:54:23.882796  165023 out.go:179] * Using the docker driver based on existing profile
	I1013 21:54:23.885573  165023 start.go:305] selected driver: docker
	I1013 21:54:23.885590  165023 start.go:925] validating driver "docker" against &{Name:pause-609677 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-609677 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false regi
stry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 21:54:23.885730  165023 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 21:54:23.885829  165023 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 21:54:23.940215  165023 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-13 21:54:23.93120903 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 21:54:23.940975  165023 cni.go:84] Creating CNI manager for ""
	I1013 21:54:23.941043  165023 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 21:54:23.941094  165023 start.go:349] cluster config:
	{Name:pause-609677 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-609677 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false
storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 21:54:23.944302  165023 out.go:179] * Starting "pause-609677" primary control-plane node in "pause-609677" cluster
	I1013 21:54:23.947082  165023 cache.go:123] Beginning downloading kic base image for docker with crio
	I1013 21:54:23.949971  165023 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1013 21:54:23.952746  165023 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 21:54:23.952793  165023 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-2495/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1013 21:54:23.952805  165023 cache.go:58] Caching tarball of preloaded images
	I1013 21:54:23.952819  165023 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1013 21:54:23.952881  165023 preload.go:233] Found /home/jenkins/minikube-integration/21724-2495/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1013 21:54:23.952891  165023 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1013 21:54:23.953036  165023 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/pause-609677/config.json ...
	I1013 21:54:23.972530  165023 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1013 21:54:23.972553  165023 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1013 21:54:23.972578  165023 cache.go:232] Successfully downloaded all kic artifacts
	I1013 21:54:23.972600  165023 start.go:360] acquireMachinesLock for pause-609677: {Name:mkeef98324cfd0451e87c760720ad13d14880639 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 21:54:23.972669  165023 start.go:364] duration metric: took 42.468µs to acquireMachinesLock for "pause-609677"
	I1013 21:54:23.972691  165023 start.go:96] Skipping create...Using existing machine configuration
	I1013 21:54:23.972704  165023 fix.go:54] fixHost starting: 
	I1013 21:54:23.972973  165023 cli_runner.go:164] Run: docker container inspect pause-609677 --format={{.State.Status}}
	I1013 21:54:23.988906  165023 fix.go:112] recreateIfNeeded on pause-609677: state=Running err=<nil>
	W1013 21:54:23.988933  165023 fix.go:138] unexpected machine state, will restart: <nil>
	I1013 21:54:23.992214  165023 out.go:252] * Updating the running docker "pause-609677" container ...
	I1013 21:54:23.992247  165023 machine.go:93] provisionDockerMachine start ...
	I1013 21:54:23.992321  165023 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-609677
	I1013 21:54:24.012014  165023 main.go:141] libmachine: Using SSH client type: native
	I1013 21:54:24.012357  165023 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33026 <nil> <nil>}
	I1013 21:54:24.012373  165023 main.go:141] libmachine: About to run SSH command:
	hostname
	I1013 21:54:24.159162  165023 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-609677
	
	I1013 21:54:24.159182  165023 ubuntu.go:182] provisioning hostname "pause-609677"
	I1013 21:54:24.159240  165023 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-609677
	I1013 21:54:24.176671  165023 main.go:141] libmachine: Using SSH client type: native
	I1013 21:54:24.176985  165023 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33026 <nil> <nil>}
	I1013 21:54:24.177000  165023 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-609677 && echo "pause-609677" | sudo tee /etc/hostname
	I1013 21:54:24.332820  165023 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-609677
	
	I1013 21:54:24.332904  165023 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-609677
	I1013 21:54:24.350760  165023 main.go:141] libmachine: Using SSH client type: native
	I1013 21:54:24.351068  165023 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33026 <nil> <nil>}
	I1013 21:54:24.351089  165023 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-609677' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-609677/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-609677' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 21:54:24.499869  165023 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1013 21:54:24.499899  165023 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-2495/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-2495/.minikube}
	I1013 21:54:24.499953  165023 ubuntu.go:190] setting up certificates
	I1013 21:54:24.499970  165023 provision.go:84] configureAuth start
	I1013 21:54:24.500046  165023 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-609677
	I1013 21:54:24.516842  165023 provision.go:143] copyHostCerts
	I1013 21:54:24.516912  165023 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-2495/.minikube/ca.pem, removing ...
	I1013 21:54:24.516934  165023 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-2495/.minikube/ca.pem
	I1013 21:54:24.517018  165023 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-2495/.minikube/ca.pem (1082 bytes)
	I1013 21:54:24.517125  165023 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-2495/.minikube/cert.pem, removing ...
	I1013 21:54:24.517137  165023 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-2495/.minikube/cert.pem
	I1013 21:54:24.517166  165023 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-2495/.minikube/cert.pem (1123 bytes)
	I1013 21:54:24.517225  165023 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-2495/.minikube/key.pem, removing ...
	I1013 21:54:24.517233  165023 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-2495/.minikube/key.pem
	I1013 21:54:24.517258  165023 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-2495/.minikube/key.pem (1675 bytes)
	I1013 21:54:24.517314  165023 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-2495/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca-key.pem org=jenkins.pause-609677 san=[127.0.0.1 192.168.85.2 localhost minikube pause-609677]
	I1013 21:54:25.668857  165023 provision.go:177] copyRemoteCerts
	I1013 21:54:25.668927  165023 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 21:54:25.668976  165023 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-609677
	I1013 21:54:25.685908  165023 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33026 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/pause-609677/id_rsa Username:docker}
	I1013 21:54:25.791850  165023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1013 21:54:25.809320  165023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1013 21:54:25.827382  165023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1013 21:54:25.845429  165023 provision.go:87] duration metric: took 1.345436143s to configureAuth
	I1013 21:54:25.845507  165023 ubuntu.go:206] setting minikube options for container-runtime
	I1013 21:54:25.845757  165023 config.go:182] Loaded profile config "pause-609677": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:54:25.845867  165023 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-609677
	I1013 21:54:25.863319  165023 main.go:141] libmachine: Using SSH client type: native
	I1013 21:54:25.863652  165023 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33026 <nil> <nil>}
	I1013 21:54:25.863672  165023 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1013 21:54:31.314975  165023 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1013 21:54:31.315003  165023 machine.go:96] duration metric: took 7.322746857s to provisionDockerMachine
	I1013 21:54:31.315016  165023 start.go:293] postStartSetup for "pause-609677" (driver="docker")
	I1013 21:54:31.315027  165023 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 21:54:31.315092  165023 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 21:54:31.315151  165023 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-609677
	I1013 21:54:31.334676  165023 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33026 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/pause-609677/id_rsa Username:docker}
	I1013 21:54:31.435585  165023 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 21:54:31.438987  165023 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1013 21:54:31.439013  165023 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1013 21:54:31.439024  165023 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-2495/.minikube/addons for local assets ...
	I1013 21:54:31.439078  165023 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-2495/.minikube/files for local assets ...
	I1013 21:54:31.439157  165023 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem -> 42992.pem in /etc/ssl/certs
	I1013 21:54:31.439263  165023 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1013 21:54:31.446694  165023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem --> /etc/ssl/certs/42992.pem (1708 bytes)
	I1013 21:54:31.463718  165023 start.go:296] duration metric: took 148.686132ms for postStartSetup
	I1013 21:54:31.463851  165023 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 21:54:31.463896  165023 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-609677
	I1013 21:54:31.481289  165023 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33026 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/pause-609677/id_rsa Username:docker}
	I1013 21:54:31.580915  165023 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1013 21:54:31.585601  165023 fix.go:56] duration metric: took 7.612893567s for fixHost
	I1013 21:54:31.585622  165023 start.go:83] releasing machines lock for "pause-609677", held for 7.612941722s
	I1013 21:54:31.585685  165023 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-609677
	I1013 21:54:31.601925  165023 ssh_runner.go:195] Run: cat /version.json
	I1013 21:54:31.601973  165023 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-609677
	I1013 21:54:31.601998  165023 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 21:54:31.602061  165023 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-609677
	I1013 21:54:31.629296  165023 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33026 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/pause-609677/id_rsa Username:docker}
	I1013 21:54:31.630257  165023 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33026 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/pause-609677/id_rsa Username:docker}
	I1013 21:54:31.747589  165023 ssh_runner.go:195] Run: systemctl --version
	I1013 21:54:31.841575  165023 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1013 21:54:31.881980  165023 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 21:54:31.886831  165023 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 21:54:31.886950  165023 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 21:54:31.895296  165023 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1013 21:54:31.895321  165023 start.go:495] detecting cgroup driver to use...
	I1013 21:54:31.895353  165023 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1013 21:54:31.895400  165023 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1013 21:54:31.910812  165023 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1013 21:54:31.924011  165023 docker.go:218] disabling cri-docker service (if available) ...
	I1013 21:54:31.924082  165023 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 21:54:31.939760  165023 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 21:54:31.952915  165023 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 21:54:32.088919  165023 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 21:54:32.228031  165023 docker.go:234] disabling docker service ...
	I1013 21:54:32.228105  165023 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 21:54:32.249778  165023 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 21:54:32.263755  165023 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 21:54:32.404279  165023 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 21:54:32.551291  165023 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1013 21:54:32.564170  165023 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 21:54:32.578597  165023 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1013 21:54:32.578731  165023 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 21:54:32.587931  165023 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1013 21:54:32.588046  165023 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 21:54:32.597184  165023 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 21:54:32.606259  165023 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 21:54:32.615096  165023 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 21:54:32.623755  165023 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 21:54:32.633316  165023 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 21:54:32.642050  165023 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 21:54:32.655195  165023 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 21:54:32.666111  165023 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1013 21:54:32.674767  165023 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 21:54:32.818269  165023 ssh_runner.go:195] Run: sudo systemctl restart crio
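The sed edits logged above amount to a drop-in roughly like the following sketch, assembled from the commands themselves rather than read back from the node (the section headers in /etc/crio/crio.conf.d/02-crio.conf are assumed from CRI-O's stock layout):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
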
	I1013 21:54:32.965107  165023 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1013 21:54:32.965233  165023 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1013 21:54:32.969050  165023 start.go:563] Will wait 60s for crictl version
	I1013 21:54:32.969116  165023 ssh_runner.go:195] Run: which crictl
	I1013 21:54:32.972525  165023 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1013 21:54:33.002933  165023 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1013 21:54:33.003109  165023 ssh_runner.go:195] Run: crio --version
	I1013 21:54:33.037712  165023 ssh_runner.go:195] Run: crio --version
	I1013 21:54:33.069332  165023 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1013 21:54:33.070477  165023 cli_runner.go:164] Run: docker network inspect pause-609677 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 21:54:33.086048  165023 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1013 21:54:33.090076  165023 kubeadm.go:883] updating cluster {Name:pause-609677 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-609677 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 21:54:33.090223  165023 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 21:54:33.090287  165023 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 21:54:33.122239  165023 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 21:54:33.122263  165023 crio.go:433] Images already preloaded, skipping extraction
	I1013 21:54:33.122333  165023 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 21:54:33.148761  165023 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 21:54:33.148792  165023 cache_images.go:85] Images are preloaded, skipping loading
	I1013 21:54:33.148801  165023 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1013 21:54:33.148912  165023 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-609677 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-609677 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1013 21:54:33.148996  165023 ssh_runner.go:195] Run: crio config
	I1013 21:54:33.223327  165023 cni.go:84] Creating CNI manager for ""
	I1013 21:54:33.223396  165023 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 21:54:33.223423  165023 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1013 21:54:33.223500  165023 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-609677 NodeName:pause-609677 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 21:54:33.223678  165023 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-609677"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1013 21:54:33.223808  165023 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1013 21:54:33.231658  165023 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 21:54:33.231752  165023 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 21:54:33.239389  165023 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1013 21:54:33.252310  165023 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 21:54:33.266932  165023 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1013 21:54:33.280069  165023 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1013 21:54:33.283695  165023 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 21:54:33.421047  165023 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 21:54:33.436332  165023 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/pause-609677 for IP: 192.168.85.2
	I1013 21:54:33.436395  165023 certs.go:195] generating shared ca certs ...
	I1013 21:54:33.436423  165023 certs.go:227] acquiring lock for ca certs: {Name:mk2386d3847709c1fe7ff4ab092e2e3fd8551167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:54:33.436583  165023 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-2495/.minikube/ca.key
	I1013 21:54:33.436662  165023 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.key
	I1013 21:54:33.436707  165023 certs.go:257] generating profile certs ...
	I1013 21:54:33.436836  165023 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/pause-609677/client.key
	I1013 21:54:33.436942  165023 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/pause-609677/apiserver.key.e0590797
	I1013 21:54:33.437009  165023 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/pause-609677/proxy-client.key
	I1013 21:54:33.437152  165023 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/4299.pem (1338 bytes)
	W1013 21:54:33.437208  165023 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-2495/.minikube/certs/4299_empty.pem, impossibly tiny 0 bytes
	I1013 21:54:33.437230  165023 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca-key.pem (1679 bytes)
	I1013 21:54:33.437288  165023 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem (1082 bytes)
	I1013 21:54:33.437335  165023 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem (1123 bytes)
	I1013 21:54:33.437373  165023 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/key.pem (1675 bytes)
	I1013 21:54:33.437447  165023 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem (1708 bytes)
	I1013 21:54:33.438050  165023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 21:54:33.456940  165023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1013 21:54:33.476234  165023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 21:54:33.495304  165023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1013 21:54:33.515262  165023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/pause-609677/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1013 21:54:33.535686  165023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/pause-609677/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1013 21:54:33.555182  165023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/pause-609677/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 21:54:33.574041  165023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/pause-609677/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1013 21:54:33.591195  165023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 21:54:33.609994  165023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/certs/4299.pem --> /usr/share/ca-certificates/4299.pem (1338 bytes)
	I1013 21:54:33.627418  165023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem --> /usr/share/ca-certificates/42992.pem (1708 bytes)
	I1013 21:54:33.644805  165023 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 21:54:33.657769  165023 ssh_runner.go:195] Run: openssl version
	I1013 21:54:33.664359  165023 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 21:54:33.675337  165023 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 21:54:33.679142  165023 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 20:59 /usr/share/ca-certificates/minikubeCA.pem
	I1013 21:54:33.679204  165023 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 21:54:33.720535  165023 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1013 21:54:33.728027  165023 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4299.pem && ln -fs /usr/share/ca-certificates/4299.pem /etc/ssl/certs/4299.pem"
	I1013 21:54:33.735579  165023 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4299.pem
	I1013 21:54:33.739001  165023 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 13 21:06 /usr/share/ca-certificates/4299.pem
	I1013 21:54:33.739058  165023 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4299.pem
	I1013 21:54:33.779574  165023 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4299.pem /etc/ssl/certs/51391683.0"
	I1013 21:54:33.787008  165023 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/42992.pem && ln -fs /usr/share/ca-certificates/42992.pem /etc/ssl/certs/42992.pem"
	I1013 21:54:33.794477  165023 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42992.pem
	I1013 21:54:33.797781  165023 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 13 21:06 /usr/share/ca-certificates/42992.pem
	I1013 21:54:33.797840  165023 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42992.pem
	I1013 21:54:33.838478  165023 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/42992.pem /etc/ssl/certs/3ec20f2e.0"
	I1013 21:54:33.845965  165023 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 21:54:33.849386  165023 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1013 21:54:33.889777  165023 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1013 21:54:33.931020  165023 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1013 21:54:33.972493  165023 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1013 21:54:34.015199  165023 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1013 21:54:34.056891  165023 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1013 21:54:34.098551  165023 kubeadm.go:400] StartCluster: {Name:pause-609677 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-609677 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 21:54:34.098670  165023 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 21:54:34.098732  165023 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 21:54:34.131819  165023 cri.go:89] found id: "20c5aff0e9704a0e9cf80f1bc3097b3adcc89c040fb42664c031162cc8af3eee"
	I1013 21:54:34.131840  165023 cri.go:89] found id: "e633956403c8d2bcdad3ed466b8fd307d41d45bfbb6d078f6e5f72f0c194d2ee"
	I1013 21:54:34.131844  165023 cri.go:89] found id: "f07f3c1fea64da214073a546691f51cbeed5401814bdfecf8c5c2d7d965b76dc"
	I1013 21:54:34.131848  165023 cri.go:89] found id: "cb21928f370feb3f97bfbce9a8d34340e56cbc857eadc6accb9f1d851d0886c3"
	I1013 21:54:34.131851  165023 cri.go:89] found id: "da8fa92310de218e142725844a91bcbbb0d5395e7f060fbf8775c56dcadde035"
	I1013 21:54:34.131854  165023 cri.go:89] found id: "082f903b4adc7c39f642e460f3161aa4ba0f568fdd713a3bde9e5748752b5eb7"
	I1013 21:54:34.131857  165023 cri.go:89] found id: "c60258466c3d1703891e0584fffda2246e17fe642e25b99afaf0cb9e26934b79"
	I1013 21:54:34.131860  165023 cri.go:89] found id: ""
	I1013 21:54:34.131909  165023 ssh_runner.go:195] Run: sudo runc list -f json
	W1013 21:54:34.142394  165023 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T21:54:34Z" level=error msg="open /run/runc: no such file or directory"
	I1013 21:54:34.142485  165023 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1013 21:54:34.150046  165023 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1013 21:54:34.150066  165023 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1013 21:54:34.150145  165023 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1013 21:54:34.157296  165023 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1013 21:54:34.157841  165023 kubeconfig.go:125] found "pause-609677" server: "https://192.168.85.2:8443"
	I1013 21:54:34.158409  165023 kapi.go:59] client config for pause-609677: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21724-2495/.minikube/profiles/pause-609677/client.crt", KeyFile:"/home/jenkins/minikube-integration/21724-2495/.minikube/profiles/pause-609677/client.key", CAFile:"/home/jenkins/minikube-integration/21724-2495/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(
nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120110), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1013 21:54:34.158867  165023 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1013 21:54:34.158888  165023 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1013 21:54:34.158893  165023 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1013 21:54:34.158898  165023 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1013 21:54:34.158903  165023 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1013 21:54:34.159183  165023 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1013 21:54:34.166859  165023 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1013 21:54:34.166893  165023 kubeadm.go:601] duration metric: took 16.820973ms to restartPrimaryControlPlane
	I1013 21:54:34.166902  165023 kubeadm.go:402] duration metric: took 68.361154ms to StartCluster
	I1013 21:54:34.166949  165023 settings.go:142] acquiring lock: {Name:mk4a4b065845724eb9b4bb1832a39a02e57dd066 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:54:34.167037  165023 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-2495/kubeconfig
	I1013 21:54:34.167677  165023 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/kubeconfig: {Name:mke211bdcf461494c5205a242d94f4d44bbce10e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 21:54:34.168003  165023 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 21:54:34.168288  165023 config.go:182] Loaded profile config "pause-609677": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:54:34.168356  165023 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1013 21:54:34.169365  165023 out.go:179] * Verifying Kubernetes components...
	I1013 21:54:34.170007  165023 out.go:179] * Enabled addons: 
	I1013 21:54:34.170840  165023 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 21:54:34.171446  165023 addons.go:514] duration metric: took 3.081444ms for enable addons: enabled=[]
	I1013 21:54:34.312030  165023 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 21:54:34.325738  165023 node_ready.go:35] waiting up to 6m0s for node "pause-609677" to be "Ready" ...
	I1013 21:54:38.037753  165023 node_ready.go:49] node "pause-609677" is "Ready"
	I1013 21:54:38.037781  165023 node_ready.go:38] duration metric: took 3.712008029s for node "pause-609677" to be "Ready" ...
	I1013 21:54:38.037794  165023 api_server.go:52] waiting for apiserver process to appear ...
	I1013 21:54:38.037851  165023 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 21:54:38.061550  165023 api_server.go:72] duration metric: took 3.89351116s to wait for apiserver process to appear ...
	I1013 21:54:38.061574  165023 api_server.go:88] waiting for apiserver healthz status ...
	I1013 21:54:38.061629  165023 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1013 21:54:38.184114  165023 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1013 21:54:38.184183  165023 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1013 21:54:38.561670  165023 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1013 21:54:38.570410  165023 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1013 21:54:38.570438  165023 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1013 21:54:39.062275  165023 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1013 21:54:39.070721  165023 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1013 21:54:39.071857  165023 api_server.go:141] control plane version: v1.34.1
	I1013 21:54:39.071884  165023 api_server.go:131] duration metric: took 1.010280481s to wait for apiserver health ...
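The polling above (roughly one probe every 500ms until /healthz returns 200) can be sketched in Go as follows. waitForHealthz, the hard-coded URL, and the insecure TLS transport are illustrative assumptions, not minikube's actual api_server.go helper, which authenticates with the cluster's client certificates:

    // Minimal sketch: poll an apiserver /healthz endpoint until it returns 200,
    // mirroring the api_server.go lines above. Assumptions: helper name and the
    // InsecureSkipVerify transport are for illustration only.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // simplification
    		},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // apiserver reported healthy
    			}
    		}
    		time.Sleep(500 * time.Millisecond) // retry interval seen in the log above
    	}
    	return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.85.2:8443/healthz", time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
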
	I1013 21:54:39.071895  165023 system_pods.go:43] waiting for kube-system pods to appear ...
	I1013 21:54:39.075535  165023 system_pods.go:59] 7 kube-system pods found
	I1013 21:54:39.075573  165023 system_pods.go:61] "coredns-66bc5c9577-9hxkk" [22b9d94d-f872-48ad-a5fa-77a5bd5186d1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 21:54:39.075584  165023 system_pods.go:61] "etcd-pause-609677" [40e77bdb-3010-4ec4-8431-0bb665621837] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1013 21:54:39.075590  165023 system_pods.go:61] "kindnet-gbt7d" [41ae05d3-8177-4a09-8617-d9c26c154582] Running
	I1013 21:54:39.075596  165023 system_pods.go:61] "kube-apiserver-pause-609677" [1b8b1ce4-baf2-4e1e-b1cb-ceafdc25add4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1013 21:54:39.075604  165023 system_pods.go:61] "kube-controller-manager-pause-609677" [ab49c9ae-c703-47c0-a878-d45b799d6592] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1013 21:54:39.075614  165023 system_pods.go:61] "kube-proxy-6zl75" [ae7ccd7e-e862-4dad-9fdf-c049be8b6d2e] Running
	I1013 21:54:39.075622  165023 system_pods.go:61] "kube-scheduler-pause-609677" [1f6f9b94-01cc-421d-b461-d5cdf4c7dd42] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1013 21:54:39.075630  165023 system_pods.go:74] duration metric: took 3.727818ms to wait for pod list to return data ...
	I1013 21:54:39.075650  165023 default_sa.go:34] waiting for default service account to be created ...
	I1013 21:54:39.078230  165023 default_sa.go:45] found service account: "default"
	I1013 21:54:39.078257  165023 default_sa.go:55] duration metric: took 2.600389ms for default service account to be created ...
	I1013 21:54:39.078267  165023 system_pods.go:116] waiting for k8s-apps to be running ...
	I1013 21:54:39.081114  165023 system_pods.go:86] 7 kube-system pods found
	I1013 21:54:39.081161  165023 system_pods.go:89] "coredns-66bc5c9577-9hxkk" [22b9d94d-f872-48ad-a5fa-77a5bd5186d1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 21:54:39.081208  165023 system_pods.go:89] "etcd-pause-609677" [40e77bdb-3010-4ec4-8431-0bb665621837] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1013 21:54:39.081228  165023 system_pods.go:89] "kindnet-gbt7d" [41ae05d3-8177-4a09-8617-d9c26c154582] Running
	I1013 21:54:39.081244  165023 system_pods.go:89] "kube-apiserver-pause-609677" [1b8b1ce4-baf2-4e1e-b1cb-ceafdc25add4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1013 21:54:39.081254  165023 system_pods.go:89] "kube-controller-manager-pause-609677" [ab49c9ae-c703-47c0-a878-d45b799d6592] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1013 21:54:39.081282  165023 system_pods.go:89] "kube-proxy-6zl75" [ae7ccd7e-e862-4dad-9fdf-c049be8b6d2e] Running
	I1013 21:54:39.081309  165023 system_pods.go:89] "kube-scheduler-pause-609677" [1f6f9b94-01cc-421d-b461-d5cdf4c7dd42] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1013 21:54:39.081324  165023 system_pods.go:126] duration metric: took 3.051061ms to wait for k8s-apps to be running ...
	I1013 21:54:39.081334  165023 system_svc.go:44] waiting for kubelet service to be running ....
	I1013 21:54:39.081436  165023 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 21:54:39.095000  165023 system_svc.go:56] duration metric: took 13.649095ms WaitForService to wait for kubelet
	I1013 21:54:39.095032  165023 kubeadm.go:586] duration metric: took 4.926997551s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 21:54:39.095050  165023 node_conditions.go:102] verifying NodePressure condition ...
	I1013 21:54:39.098012  165023 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1013 21:54:39.098039  165023 node_conditions.go:123] node cpu capacity is 2
	I1013 21:54:39.098051  165023 node_conditions.go:105] duration metric: took 2.995792ms to run NodePressure ...
	I1013 21:54:39.098063  165023 start.go:241] waiting for startup goroutines ...
	I1013 21:54:39.098074  165023 start.go:246] waiting for cluster config update ...
	I1013 21:54:39.098086  165023 start.go:255] writing updated cluster config ...
	I1013 21:54:39.098393  165023 ssh_runner.go:195] Run: rm -f paused
	I1013 21:54:39.102080  165023 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 21:54:39.102591  165023 kapi.go:59] client config for pause-609677: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21724-2495/.minikube/profiles/pause-609677/client.crt", KeyFile:"/home/jenkins/minikube-integration/21724-2495/.minikube/profiles/pause-609677/client.key", CAFile:"/home/jenkins/minikube-integration/21724-2495/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(
nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120110), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1013 21:54:39.107647  165023 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-9hxkk" in "kube-system" namespace to be "Ready" or be gone ...
	W1013 21:54:41.113255  165023 pod_ready.go:104] pod "coredns-66bc5c9577-9hxkk" is not "Ready", error: <nil>
	W1013 21:54:43.614804  165023 pod_ready.go:104] pod "coredns-66bc5c9577-9hxkk" is not "Ready", error: <nil>
	W1013 21:54:46.113050  165023 pod_ready.go:104] pod "coredns-66bc5c9577-9hxkk" is not "Ready", error: <nil>
	I1013 21:54:47.114389  165023 pod_ready.go:94] pod "coredns-66bc5c9577-9hxkk" is "Ready"
	I1013 21:54:47.114415  165023 pod_ready.go:86] duration metric: took 8.006744552s for pod "coredns-66bc5c9577-9hxkk" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:54:47.117044  165023 pod_ready.go:83] waiting for pod "etcd-pause-609677" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:54:47.121531  165023 pod_ready.go:94] pod "etcd-pause-609677" is "Ready"
	I1013 21:54:47.121559  165023 pod_ready.go:86] duration metric: took 4.490924ms for pod "etcd-pause-609677" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:54:47.124044  165023 pod_ready.go:83] waiting for pod "kube-apiserver-pause-609677" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:54:47.128683  165023 pod_ready.go:94] pod "kube-apiserver-pause-609677" is "Ready"
	I1013 21:54:47.128709  165023 pod_ready.go:86] duration metric: took 4.641049ms for pod "kube-apiserver-pause-609677" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:54:47.130943  165023 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-609677" in "kube-system" namespace to be "Ready" or be gone ...
	W1013 21:54:49.137664  165023 pod_ready.go:104] pod "kube-controller-manager-pause-609677" is not "Ready", error: <nil>
	I1013 21:54:49.636154  165023 pod_ready.go:94] pod "kube-controller-manager-pause-609677" is "Ready"
	I1013 21:54:49.636186  165023 pod_ready.go:86] duration metric: took 2.505216604s for pod "kube-controller-manager-pause-609677" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:54:49.638426  165023 pod_ready.go:83] waiting for pod "kube-proxy-6zl75" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:54:49.912651  165023 pod_ready.go:94] pod "kube-proxy-6zl75" is "Ready"
	I1013 21:54:49.912686  165023 pod_ready.go:86] duration metric: took 274.239669ms for pod "kube-proxy-6zl75" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:54:50.112252  165023 pod_ready.go:83] waiting for pod "kube-scheduler-pause-609677" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:54:50.512462  165023 pod_ready.go:94] pod "kube-scheduler-pause-609677" is "Ready"
	I1013 21:54:50.512487  165023 pod_ready.go:86] duration metric: took 400.209055ms for pod "kube-scheduler-pause-609677" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 21:54:50.512500  165023 pod_ready.go:40] duration metric: took 11.410391122s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
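The per-pod waits above boil down to checking each pod's Ready condition through the API server. A minimal client-go sketch of that check follows; podIsReady is a hypothetical helper, and the kubeconfig path is the one written earlier in this log:

    // Sketch under stated assumptions: list kube-system pods and report whether
    // each has the PodReady condition set to True, as the pod_ready.go waits do.
    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podIsReady reports whether the pod's Ready condition is True.
    func podIsReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	// Kubeconfig path taken from the log above.
    	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21724-2495/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		panic(err)
    	}
    	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, p := range pods.Items {
    		fmt.Printf("%s ready=%v\n", p.Name, podIsReady(&p))
    	}
    }
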
	I1013 21:54:50.568862  165023 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1013 21:54:50.573836  165023 out.go:179] * Done! kubectl is now configured to use "pause-609677" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 13 21:54:34 pause-609677 crio[2077]: time="2025-10-13T21:54:34.777906798Z" level=info msg="Starting container: e66556a02af29b5639690e5cd3315bd331955efbe9dba130bc12a73fb77b4cb6" id=c348e06e-58d3-4b43-9fc9-585305443851 name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 21:54:34 pause-609677 crio[2077]: time="2025-10-13T21:54:34.781589448Z" level=info msg="Started container" PID=2375 containerID=37fc253c25c3b33e95033f66d59c43bec2cc24937ab8865d0b893a1ca44bd78b description=kube-system/kube-proxy-6zl75/kube-proxy id=17261b53-d28a-44ae-85df-428d8d9aea49 name=/runtime.v1.RuntimeService/StartContainer sandboxID=80f4d2924ee9954d9ed6dcbd3b752ef627b5d70691f6abed000084abb3767dc0
	Oct 13 21:54:34 pause-609677 crio[2077]: time="2025-10-13T21:54:34.791265365Z" level=info msg="Started container" PID=2381 containerID=e66556a02af29b5639690e5cd3315bd331955efbe9dba130bc12a73fb77b4cb6 description=kube-system/coredns-66bc5c9577-9hxkk/coredns id=c348e06e-58d3-4b43-9fc9-585305443851 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4494ec7ec867a68424517e93d2d6e1bcbdc3a770231f6f45ab80a0fa74ced8e3
	Oct 13 21:54:34 pause-609677 crio[2077]: time="2025-10-13T21:54:34.796181779Z" level=info msg="Creating container: kube-system/kube-scheduler-pause-609677/kube-scheduler" id=f5af2cbd-254a-47c6-a74b-4c545e80a5a8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 21:54:34 pause-609677 crio[2077]: time="2025-10-13T21:54:34.796508343Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 21:54:34 pause-609677 crio[2077]: time="2025-10-13T21:54:34.802874892Z" level=info msg="Created container 23af4780ca7ffb5793e522bd1ba38bec36ec20fb0c2870aa8f742802612b1675: kube-system/kube-controller-manager-pause-609677/kube-controller-manager" id=c6ab3cf1-1e9b-4388-87b2-653cb93d1fce name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 21:54:34 pause-609677 crio[2077]: time="2025-10-13T21:54:34.814276728Z" level=info msg="Starting container: 23af4780ca7ffb5793e522bd1ba38bec36ec20fb0c2870aa8f742802612b1675" id=4efeb6f7-e0f4-4710-962e-b3cddb362061 name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 21:54:34 pause-609677 crio[2077]: time="2025-10-13T21:54:34.824476479Z" level=info msg="Started container" PID=2389 containerID=23af4780ca7ffb5793e522bd1ba38bec36ec20fb0c2870aa8f742802612b1675 description=kube-system/kube-controller-manager-pause-609677/kube-controller-manager id=4efeb6f7-e0f4-4710-962e-b3cddb362061 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4f50d7fbb522400ec852376789b5360b2b5139c3262d1a0e39a868280b0f64d6
	Oct 13 21:54:34 pause-609677 crio[2077]: time="2025-10-13T21:54:34.825003374Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 21:54:34 pause-609677 crio[2077]: time="2025-10-13T21:54:34.825638458Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 21:54:34 pause-609677 crio[2077]: time="2025-10-13T21:54:34.860694331Z" level=info msg="Created container e813b1e3d651851055f0daeb10197c02a797ba29bdd3f3c236cb1f479151d3c7: kube-system/kube-scheduler-pause-609677/kube-scheduler" id=f5af2cbd-254a-47c6-a74b-4c545e80a5a8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 21:54:34 pause-609677 crio[2077]: time="2025-10-13T21:54:34.861302208Z" level=info msg="Starting container: e813b1e3d651851055f0daeb10197c02a797ba29bdd3f3c236cb1f479151d3c7" id=fa03b318-8c65-4c92-b2f5-8d8803acd78e name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 21:54:34 pause-609677 crio[2077]: time="2025-10-13T21:54:34.862992231Z" level=info msg="Started container" PID=2411 containerID=e813b1e3d651851055f0daeb10197c02a797ba29bdd3f3c236cb1f479151d3c7 description=kube-system/kube-scheduler-pause-609677/kube-scheduler id=fa03b318-8c65-4c92-b2f5-8d8803acd78e name=/runtime.v1.RuntimeService/StartContainer sandboxID=c8ef35d0dc281196139fb6003d049dad58c8ff1f552b5f3134422ee5ccfce964
	Oct 13 21:54:45 pause-609677 crio[2077]: time="2025-10-13T21:54:45.023283895Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 13 21:54:45 pause-609677 crio[2077]: time="2025-10-13T21:54:45.031736477Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 13 21:54:45 pause-609677 crio[2077]: time="2025-10-13T21:54:45.031805169Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 13 21:54:45 pause-609677 crio[2077]: time="2025-10-13T21:54:45.031838727Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 13 21:54:45 pause-609677 crio[2077]: time="2025-10-13T21:54:45.039761099Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 13 21:54:45 pause-609677 crio[2077]: time="2025-10-13T21:54:45.040031377Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 13 21:54:45 pause-609677 crio[2077]: time="2025-10-13T21:54:45.040158003Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 13 21:54:45 pause-609677 crio[2077]: time="2025-10-13T21:54:45.047074315Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 13 21:54:45 pause-609677 crio[2077]: time="2025-10-13T21:54:45.047110794Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 13 21:54:45 pause-609677 crio[2077]: time="2025-10-13T21:54:45.047138502Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 13 21:54:45 pause-609677 crio[2077]: time="2025-10-13T21:54:45.052054062Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 13 21:54:45 pause-609677 crio[2077]: time="2025-10-13T21:54:45.052093216Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	e813b1e3d6518       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   20 seconds ago       Running             kube-scheduler            1                   c8ef35d0dc281       kube-scheduler-pause-609677            kube-system
	23af4780ca7ff       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   20 seconds ago       Running             kube-controller-manager   1                   4f50d7fbb5224       kube-controller-manager-pause-609677   kube-system
	e66556a02af29       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   20 seconds ago       Running             coredns                   1                   4494ec7ec867a       coredns-66bc5c9577-9hxkk               kube-system
	37fc253c25c3b       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   20 seconds ago       Running             kube-proxy                1                   80f4d2924ee99       kube-proxy-6zl75                       kube-system
	1dd85a64ce20b       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   20 seconds ago       Running             kindnet-cni               1                   af594200932e0       kindnet-gbt7d                          kube-system
	fc8e2ea74687b       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   20 seconds ago       Running             etcd                      1                   42c23535fb7b6       etcd-pause-609677                      kube-system
	b779d4fe4cba3       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   20 seconds ago       Running             kube-apiserver            1                   724e464cfb9f6       kube-apiserver-pause-609677            kube-system
	20c5aff0e9704       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   34 seconds ago       Exited              coredns                   0                   4494ec7ec867a       coredns-66bc5c9577-9hxkk               kube-system
	e633956403c8d       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   af594200932e0       kindnet-gbt7d                          kube-system
	f07f3c1fea64d       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Exited              kube-proxy                0                   80f4d2924ee99       kube-proxy-6zl75                       kube-system
	cb21928f370fe       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   0                   4f50d7fbb5224       kube-controller-manager-pause-609677   kube-system
	da8fa92310de2       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            0                   c8ef35d0dc281       kube-scheduler-pause-609677            kube-system
	082f903b4adc7       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            0                   724e464cfb9f6       kube-apiserver-pause-609677            kube-system
	c60258466c3d1       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      0                   42c23535fb7b6       etcd-pause-609677                      kube-system
	
	
	==> coredns [20c5aff0e9704a0e9cf80f1bc3097b3adcc89c040fb42664c031162cc8af3eee] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60356 - 27278 "HINFO IN 2506169572643410066.3812097225966522119. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013754059s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [e66556a02af29b5639690e5cd3315bd331955efbe9dba130bc12a73fb77b4cb6] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60875 - 45285 "HINFO IN 196991321812913998.3637127665570046469. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.013266967s
	
	
	==> describe nodes <==
	Name:               pause-609677
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-609677
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6d66ff63385795e7745a92b3d96cb54f5b977801
	                    minikube.k8s.io/name=pause-609677
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_13T21_53_33_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 Oct 2025 21:53:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-609677
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 Oct 2025 21:54:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 Oct 2025 21:54:24 +0000   Mon, 13 Oct 2025 21:53:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 Oct 2025 21:54:24 +0000   Mon, 13 Oct 2025 21:53:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 Oct 2025 21:54:24 +0000   Mon, 13 Oct 2025 21:53:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 13 Oct 2025 21:54:24 +0000   Mon, 13 Oct 2025 21:54:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-609677
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 ebd21760060d4a39853f563197507e5d
	  System UUID:                0def1069-5034-4287-905d-8502ad76088b
	  Boot ID:                    d306204e-e74c-4697-9887-c9f3c96cc083
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-9hxkk                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     76s
	  kube-system                 etcd-pause-609677                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         82s
	  kube-system                 kindnet-gbt7d                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      77s
	  kube-system                 kube-apiserver-pause-609677             250m (12%)    0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 kube-controller-manager-pause-609677    200m (10%)    0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 kube-proxy-6zl75                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 kube-scheduler-pause-609677             100m (5%)     0 (0%)      0 (0%)           0 (0%)         84s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 75s                kube-proxy       
	  Normal   Starting                 17s                kube-proxy       
	  Normal   NodeHasSufficientMemory  91s (x8 over 91s)  kubelet          Node pause-609677 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    91s (x8 over 91s)  kubelet          Node pause-609677 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     91s (x8 over 91s)  kubelet          Node pause-609677 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 82s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  82s                kubelet          Node pause-609677 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    82s                kubelet          Node pause-609677 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     82s                kubelet          Node pause-609677 status is now: NodeHasSufficientPID
	  Normal   Starting                 82s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           78s                node-controller  Node pause-609677 event: Registered Node pause-609677 in Controller
	  Normal   NodeReady                35s                kubelet          Node pause-609677 status is now: NodeReady
	  Warning  ContainerGCFailed        22s                kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           14s                node-controller  Node pause-609677 event: Registered Node pause-609677 in Controller
	
	
	==> dmesg <==
	[Oct13 21:28] overlayfs: idmapped layers are currently not supported
	[  +4.197577] overlayfs: idmapped layers are currently not supported
	[Oct13 21:29] overlayfs: idmapped layers are currently not supported
	[ +40.174368] overlayfs: idmapped layers are currently not supported
	[Oct13 21:30] hrtimer: interrupt took 51471165 ns
	[Oct13 21:31] overlayfs: idmapped layers are currently not supported
	[Oct13 21:36] overlayfs: idmapped layers are currently not supported
	[ +36.803698] overlayfs: idmapped layers are currently not supported
	[Oct13 21:38] overlayfs: idmapped layers are currently not supported
	[Oct13 21:39] overlayfs: idmapped layers are currently not supported
	[Oct13 21:40] overlayfs: idmapped layers are currently not supported
	[Oct13 21:41] overlayfs: idmapped layers are currently not supported
	[Oct13 21:42] overlayfs: idmapped layers are currently not supported
	[  +7.684868] overlayfs: idmapped layers are currently not supported
	[Oct13 21:43] overlayfs: idmapped layers are currently not supported
	[ +17.500139] overlayfs: idmapped layers are currently not supported
	[Oct13 21:44] overlayfs: idmapped layers are currently not supported
	[ +25.978359] overlayfs: idmapped layers are currently not supported
	[Oct13 21:46] overlayfs: idmapped layers are currently not supported
	[Oct13 21:47] overlayfs: idmapped layers are currently not supported
	[Oct13 21:49] overlayfs: idmapped layers are currently not supported
	[Oct13 21:50] overlayfs: idmapped layers are currently not supported
	[Oct13 21:51] overlayfs: idmapped layers are currently not supported
	[Oct13 21:53] overlayfs: idmapped layers are currently not supported
	[Oct13 21:54] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [c60258466c3d1703891e0584fffda2246e17fe642e25b99afaf0cb9e26934b79] <==
	{"level":"warn","ts":"2025-10-13T21:53:27.976879Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:53:28.027421Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:53:28.126198Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:53:28.140109Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:53:28.224374Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:53:28.248291Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:53:28.459769Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39766","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-13T21:54:26.028785Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-13T21:54:26.028849Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-609677","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-10-13T21:54:26.028952Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-13T21:54:26.173749Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-13T21:54:26.173847Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-13T21:54:26.173891Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"warn","ts":"2025-10-13T21:54:26.173916Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-13T21:54:26.173947Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2025-10-13T21:54:26.173946Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"error","ts":"2025-10-13T21:54:26.173955Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-13T21:54:26.173958Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-10-13T21:54:26.174008Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-13T21:54:26.174019Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-13T21:54:26.174027Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-13T21:54:26.177296Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-10-13T21:54:26.177385Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-13T21:54:26.177424Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-13T21:54:26.177431Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-609677","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> etcd [fc8e2ea74687b306f0b75c55f4977da5505971a5a74789317b3fab98a5e92f03] <==
	{"level":"warn","ts":"2025-10-13T21:54:36.797905Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:54:36.819857Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:54:36.830353Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:54:36.845972Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:54:36.866996Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:54:36.876239Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:54:36.899635Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:54:36.913981Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:54:36.931336Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:54:36.961101Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:54:36.976026Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:54:36.990247Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:54:37.005518Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:54:37.024810Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:54:37.042156Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:54:37.060607Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:54:37.074760Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:54:37.094549Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:54:37.102614Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:54:37.116466Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:54:37.138186Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:54:37.167089Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:54:37.179575Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:54:37.194211Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T21:54:37.260754Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45564","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 21:54:55 up  1:37,  0 user,  load average: 2.17, 2.96, 2.42
	Linux pause-609677 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1dd85a64ce20b9dcdee25927856a7136cc2ca59b9128254415532367203a522a] <==
	I1013 21:54:34.873565       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1013 21:54:34.876225       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1013 21:54:34.876380       1 main.go:148] setting mtu 1500 for CNI 
	I1013 21:54:34.876402       1 main.go:178] kindnetd IP family: "ipv4"
	I1013 21:54:34.876414       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-13T21:54:35Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1013 21:54:35.016357       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1013 21:54:35.031968       1 controller.go:381] "Waiting for informer caches to sync"
	I1013 21:54:35.032064       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1013 21:54:35.033057       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1013 21:54:38.134080       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1013 21:54:38.134124       1 metrics.go:72] Registering metrics
	I1013 21:54:38.134176       1 controller.go:711] "Syncing nftables rules"
	I1013 21:54:45.019857       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1013 21:54:45.019923       1 main.go:301] handling current node
	I1013 21:54:55.019899       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1013 21:54:55.019942       1 main.go:301] handling current node
	
	
	==> kindnet [e633956403c8d2bcdad3ed466b8fd307d41d45bfbb6d078f6e5f72f0c194d2ee] <==
	I1013 21:53:40.026268       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1013 21:53:40.039955       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1013 21:53:40.040195       1 main.go:148] setting mtu 1500 for CNI 
	I1013 21:53:40.040243       1 main.go:178] kindnetd IP family: "ipv4"
	I1013 21:53:40.040286       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-13T21:53:40Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1013 21:53:40.212395       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1013 21:53:40.212468       1 controller.go:381] "Waiting for informer caches to sync"
	I1013 21:53:40.212500       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1013 21:53:40.212638       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1013 21:54:10.213064       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1013 21:54:10.213258       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1013 21:54:10.213379       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1013 21:54:10.214647       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1013 21:54:11.813079       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1013 21:54:11.813111       1 metrics.go:72] Registering metrics
	I1013 21:54:11.813189       1 controller.go:711] "Syncing nftables rules"
	I1013 21:54:20.212400       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1013 21:54:20.212456       1 main.go:301] handling current node
	
	
	==> kube-apiserver [082f903b4adc7c39f642e460f3161aa4ba0f568fdd713a3bde9e5748752b5eb7] <==
	W1013 21:54:26.052984       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 21:54:26.053046       1 logging.go:55] [core] [Channel #191 SubChannel #193]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 21:54:26.053098       1 logging.go:55] [core] [Channel #247 SubChannel #249]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 21:54:26.053167       1 logging.go:55] [core] [Channel #9 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 21:54:26.053225       1 logging.go:55] [core] [Channel #63 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 21:54:26.053277       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 21:54:26.053328       1 logging.go:55] [core] [Channel #211 SubChannel #213]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 21:54:26.053380       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 21:54:26.053428       1 logging.go:55] [core] [Channel #151 SubChannel #153]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 21:54:26.053478       1 logging.go:55] [core] [Channel #167 SubChannel #169]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 21:54:26.053529       1 logging.go:55] [core] [Channel #159 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 21:54:26.053577       1 logging.go:55] [core] [Channel #107 SubChannel #109]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 21:54:26.053629       1 logging.go:55] [core] [Channel #171 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 21:54:26.053708       1 logging.go:55] [core] [Channel #243 SubChannel #245]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 21:54:26.053759       1 logging.go:55] [core] [Channel #115 SubChannel #117]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 21:54:26.053810       1 logging.go:55] [core] [Channel #123 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 21:54:26.054018       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 21:54:26.054064       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 21:54:26.054108       1 logging.go:55] [core] [Channel #119 SubChannel #121]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 21:54:26.054176       1 logging.go:55] [core] [Channel #143 SubChannel #145]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 21:54:26.054951       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 21:54:26.055004       1 logging.go:55] [core] [Channel #223 SubChannel #225]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 21:54:26.055060       1 logging.go:55] [core] [Channel #163 SubChannel #165]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1013 21:54:26.055450       1 logging.go:55] [core] [Channel #235 SubChannel #237]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [b779d4fe4cba319c08ada9835653c4429eb4ab3cfcac3fd8b6f5055e1b826f3d] <==
	I1013 21:54:38.029565       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1013 21:54:38.053836       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1013 21:54:38.056458       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1013 21:54:38.058951       1 policy_source.go:240] refreshing policies
	I1013 21:54:38.059216       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1013 21:54:38.080891       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1013 21:54:38.080928       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1013 21:54:38.081086       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1013 21:54:38.081960       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1013 21:54:38.082059       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1013 21:54:38.082098       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1013 21:54:38.095616       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1013 21:54:38.095799       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1013 21:54:38.096140       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1013 21:54:38.104972       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1013 21:54:38.107464       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1013 21:54:38.126468       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1013 21:54:38.137445       1 cache.go:39] Caches are synced for autoregister controller
	E1013 21:54:38.203929       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1013 21:54:38.773545       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1013 21:54:39.987730       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1013 21:54:41.556444       1 controller.go:667] quota admission added evaluator for: endpoints
	I1013 21:54:41.607497       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1013 21:54:41.655372       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1013 21:54:41.757133       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [23af4780ca7ffb5793e522bd1ba38bec36ec20fb0c2870aa8f742802612b1675] <==
	I1013 21:54:41.366385       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1013 21:54:41.367603       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1013 21:54:41.368784       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1013 21:54:41.373074       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1013 21:54:41.375290       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1013 21:54:41.377540       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1013 21:54:41.383849       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 21:54:41.383869       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1013 21:54:41.383878       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1013 21:54:41.388167       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1013 21:54:41.388170       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 21:54:41.390544       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1013 21:54:41.390627       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1013 21:54:41.390694       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-609677"
	I1013 21:54:41.390736       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1013 21:54:41.393501       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1013 21:54:41.394315       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1013 21:54:41.398899       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1013 21:54:41.399075       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1013 21:54:41.399733       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1013 21:54:41.399763       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1013 21:54:41.399798       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1013 21:54:41.406941       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1013 21:54:41.408204       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1013 21:54:41.408232       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-controller-manager [cb21928f370feb3f97bfbce9a8d34340e56cbc857eadc6accb9f1d851d0886c3] <==
	I1013 21:53:37.911084       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1013 21:53:37.911096       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1013 21:53:37.912307       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1013 21:53:37.916093       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1013 21:53:37.919282       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1013 21:53:37.921611       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1013 21:53:37.921742       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1013 21:53:37.921799       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1013 21:53:37.923396       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1013 21:53:37.923557       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1013 21:53:37.923804       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1013 21:53:37.924740       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1013 21:53:37.929516       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1013 21:53:37.929636       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1013 21:53:37.929704       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1013 21:53:37.929741       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1013 21:53:37.929771       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1013 21:53:37.934353       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1013 21:53:37.935199       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1013 21:53:37.940144       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-609677" podCIDRs=["10.244.0.0/24"]
	I1013 21:53:37.940275       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1013 21:53:37.971935       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 21:53:37.972019       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1013 21:53:37.972053       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1013 21:54:22.886105       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [37fc253c25c3b33e95033f66d59c43bec2cc24937ab8865d0b893a1ca44bd78b] <==
	I1013 21:54:35.306324       1 server_linux.go:53] "Using iptables proxy"
	I1013 21:54:35.798718       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 21:54:38.231892       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 21:54:38.231940       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1013 21:54:38.232018       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 21:54:38.292157       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1013 21:54:38.292267       1 server_linux.go:132] "Using iptables Proxier"
	I1013 21:54:38.307199       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 21:54:38.307567       1 server.go:527] "Version info" version="v1.34.1"
	I1013 21:54:38.307625       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 21:54:38.313431       1 config.go:200] "Starting service config controller"
	I1013 21:54:38.313461       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 21:54:38.319929       1 config.go:106] "Starting endpoint slice config controller"
	I1013 21:54:38.319950       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 21:54:38.319968       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 21:54:38.319973       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 21:54:38.320646       1 config.go:309] "Starting node config controller"
	I1013 21:54:38.320698       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 21:54:38.320727       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 21:54:38.414412       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1013 21:54:38.420716       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1013 21:54:38.420818       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [f07f3c1fea64da214073a546691f51cbeed5401814bdfecf8c5c2d7d965b76dc] <==
	I1013 21:53:40.021413       1 server_linux.go:53] "Using iptables proxy"
	I1013 21:53:40.125827       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 21:53:40.229542       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 21:53:40.229603       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1013 21:53:40.229689       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 21:53:40.247580       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1013 21:53:40.247702       1 server_linux.go:132] "Using iptables Proxier"
	I1013 21:53:40.251026       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 21:53:40.251409       1 server.go:527] "Version info" version="v1.34.1"
	I1013 21:53:40.251580       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 21:53:40.252837       1 config.go:200] "Starting service config controller"
	I1013 21:53:40.252891       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 21:53:40.252937       1 config.go:106] "Starting endpoint slice config controller"
	I1013 21:53:40.252963       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 21:53:40.252999       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 21:53:40.253025       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 21:53:40.253679       1 config.go:309] "Starting node config controller"
	I1013 21:53:40.253728       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 21:53:40.253755       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 21:53:40.354378       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1013 21:53:40.354458       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1013 21:53:40.354684       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [da8fa92310de218e142725844a91bcbbb0d5395e7f060fbf8775c56dcadde035] <==
	E1013 21:53:30.197237       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1013 21:53:30.197289       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1013 21:53:30.197341       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1013 21:53:30.197385       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1013 21:53:30.197423       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1013 21:53:30.197622       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1013 21:53:30.197674       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1013 21:53:30.197716       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1013 21:53:30.197809       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1013 21:53:30.197845       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1013 21:53:30.197880       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1013 21:53:31.012970       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1013 21:53:31.025121       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1013 21:53:31.158649       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1013 21:53:31.172387       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1013 21:53:31.192080       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1013 21:53:31.237394       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1013 21:53:31.360123       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1013 21:53:33.061884       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 21:54:26.030259       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1013 21:54:26.030283       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1013 21:54:26.030304       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1013 21:54:26.030332       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 21:54:26.030651       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1013 21:54:26.030671       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [e813b1e3d651851055f0daeb10197c02a797ba29bdd3f3c236cb1f479151d3c7] <==
	I1013 21:54:35.625325       1 serving.go:386] Generated self-signed cert in-memory
	W1013 21:54:37.964591       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1013 21:54:37.964688       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1013 21:54:37.964739       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1013 21:54:37.964770       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1013 21:54:38.152360       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1013 21:54:38.152444       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 21:54:38.154597       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 21:54:38.161050       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 21:54:38.161988       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1013 21:54:38.162059       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1013 21:54:38.261231       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 13 21:54:34 pause-609677 kubelet[1311]: E1013 21:54:34.717763    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-609677\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="15c7bbf70c5a0488c39e702688025350" pod="kube-system/kube-controller-manager-pause-609677"
	Oct 13 21:54:34 pause-609677 kubelet[1311]: E1013 21:54:34.718164    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6zl75\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="ae7ccd7e-e862-4dad-9fdf-c049be8b6d2e" pod="kube-system/kube-proxy-6zl75"
	Oct 13 21:54:34 pause-609677 kubelet[1311]: E1013 21:54:34.719226    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-gbt7d\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="41ae05d3-8177-4a09-8617-d9c26c154582" pod="kube-system/kindnet-gbt7d"
	Oct 13 21:54:34 pause-609677 kubelet[1311]: E1013 21:54:34.719586    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-9hxkk\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="22b9d94d-f872-48ad-a5fa-77a5bd5186d1" pod="kube-system/coredns-66bc5c9577-9hxkk"
	Oct 13 21:54:34 pause-609677 kubelet[1311]: E1013 21:54:34.719965    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-609677\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="001e476ef89ab0bf83975cc494875b98" pod="kube-system/kube-scheduler-pause-609677"
	Oct 13 21:54:34 pause-609677 kubelet[1311]: E1013 21:54:34.723279    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-609677\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="bbc8caca6b8b0f86abc0592289625533" pod="kube-system/etcd-pause-609677"
	Oct 13 21:54:34 pause-609677 kubelet[1311]: E1013 21:54:34.723679    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-609677\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="f070d7ef2b4970a3dd205675fce8604a" pod="kube-system/kube-apiserver-pause-609677"
	Oct 13 21:54:34 pause-609677 kubelet[1311]: I1013 21:54:34.739007    1311 scope.go:117] "RemoveContainer" containerID="da8fa92310de218e142725844a91bcbbb0d5395e7f060fbf8775c56dcadde035"
	Oct 13 21:54:34 pause-609677 kubelet[1311]: E1013 21:54:34.739520    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-609677\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="bbc8caca6b8b0f86abc0592289625533" pod="kube-system/etcd-pause-609677"
	Oct 13 21:54:34 pause-609677 kubelet[1311]: E1013 21:54:34.739716    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-609677\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="f070d7ef2b4970a3dd205675fce8604a" pod="kube-system/kube-apiserver-pause-609677"
	Oct 13 21:54:34 pause-609677 kubelet[1311]: E1013 21:54:34.739985    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-609677\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="15c7bbf70c5a0488c39e702688025350" pod="kube-system/kube-controller-manager-pause-609677"
	Oct 13 21:54:34 pause-609677 kubelet[1311]: E1013 21:54:34.740177    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6zl75\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="ae7ccd7e-e862-4dad-9fdf-c049be8b6d2e" pod="kube-system/kube-proxy-6zl75"
	Oct 13 21:54:34 pause-609677 kubelet[1311]: E1013 21:54:34.740407    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-gbt7d\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="41ae05d3-8177-4a09-8617-d9c26c154582" pod="kube-system/kindnet-gbt7d"
	Oct 13 21:54:34 pause-609677 kubelet[1311]: E1013 21:54:34.740611    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-9hxkk\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="22b9d94d-f872-48ad-a5fa-77a5bd5186d1" pod="kube-system/coredns-66bc5c9577-9hxkk"
	Oct 13 21:54:34 pause-609677 kubelet[1311]: E1013 21:54:34.740823    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-609677\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="001e476ef89ab0bf83975cc494875b98" pod="kube-system/kube-scheduler-pause-609677"
	Oct 13 21:54:37 pause-609677 kubelet[1311]: E1013 21:54:37.816257    1311 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-609677\" is forbidden: User \"system:node:pause-609677\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-609677' and this object" podUID="001e476ef89ab0bf83975cc494875b98" pod="kube-system/kube-scheduler-pause-609677"
	Oct 13 21:54:37 pause-609677 kubelet[1311]: E1013 21:54:37.832066    1311 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-609677\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-609677' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Oct 13 21:54:37 pause-609677 kubelet[1311]: E1013 21:54:37.832262    1311 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-609677\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-609677' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Oct 13 21:54:37 pause-609677 kubelet[1311]: E1013 21:54:37.919197    1311 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-609677\" is forbidden: User \"system:node:pause-609677\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-609677' and this object" podUID="bbc8caca6b8b0f86abc0592289625533" pod="kube-system/etcd-pause-609677"
	Oct 13 21:54:37 pause-609677 kubelet[1311]: E1013 21:54:37.964107    1311 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-609677\" is forbidden: User \"system:node:pause-609677\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-609677' and this object" podUID="f070d7ef2b4970a3dd205675fce8604a" pod="kube-system/kube-apiserver-pause-609677"
	Oct 13 21:54:38 pause-609677 kubelet[1311]: E1013 21:54:38.035759    1311 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-609677\" is forbidden: User \"system:node:pause-609677\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-609677' and this object" podUID="15c7bbf70c5a0488c39e702688025350" pod="kube-system/kube-controller-manager-pause-609677"
	Oct 13 21:54:43 pause-609677 kubelet[1311]: W1013 21:54:43.699443    1311 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Oct 13 21:54:51 pause-609677 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 13 21:54:51 pause-609677 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 13 21:54:51 pause-609677 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-609677 -n pause-609677
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-609677 -n pause-609677: exit status 2 (356.31662ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-609677 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (6.15s)
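
For context on the status probe above: minikube's status command prints the requested Go-template field (here {{.APIServer}}) but also encodes overall component health in its exit code, which is why the helper records "Running" yet sees exit status 2 and notes it "may be ok". Below is a minimal Go sketch of that pattern; it is not the helpers_test.go implementation, only the binary path and profile name are taken from the log, and the rest is illustrative.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// apiServerStatus runs "minikube status --format={{.APIServer}}" for a profile
	// and tolerates exit status 2, which minikube uses to signal that some
	// component is stopped or paused even though the field still prints.
	func apiServerStatus(minikubeBin, profile string) (string, error) {
		out, err := exec.Command(minikubeBin, "status", "--format={{.APIServer}}", "-p", profile).Output()
		status := strings.TrimSpace(string(out))
		if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 2 {
			return status, nil // "may be ok", as the helper notes above
		}
		return status, err
	}

	func main() {
		s, err := apiServerStatus("out/minikube-linux-arm64", "pause-609677")
		fmt.Println(s, err)
	}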

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.45s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-061725 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-061725 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (285.15916ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:05:30Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-061725 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
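
The MK_ADDON_ENABLE_PAUSED error above shows what the enable path does before applying the addon: it checks for paused containers by shelling out to "sudo runc list -f json", and that probe fails here because /run/runc does not exist on this crio node. A minimal Go sketch of that kind of paused-container probe follows; it is not minikube's code, and the JSON field names ("id", "status") are assumptions based on runc's list output.

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// runcContainer holds the two fields this probe cares about; the JSON keys
	// ("id", "status") are assumptions based on runc's list output.
	type runcContainer struct {
		ID     string `json:"id"`
		Status string `json:"status"`
	}

	// pausedContainers runs the same command quoted in the error above and
	// returns the IDs of containers reported as paused.
	func pausedContainers() ([]string, error) {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
		if err != nil {
			// On this node the probe itself fails: /run/runc is missing,
			// which is exactly what the addon command reported.
			return nil, err
		}
		var cs []runcContainer
		if err := json.Unmarshal(out, &cs); err != nil {
			return nil, err
		}
		var paused []string
		for _, c := range cs {
			if c.Status == "paused" {
				paused = append(paused, c.ID)
			}
		}
		return paused, nil
	}

	func main() {
		ids, err := pausedContainers()
		fmt.Println(ids, err)
	}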
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-061725 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-061725 describe deploy/metrics-server -n kube-system: exit status 1 (90.361994ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-061725 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
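
The failed expectation at start_stop_delete_test.go:219 is simply that the metrics-server Deployment description mentions the overridden image "fake.domain/registry.k8s.io/echoserver:1.4". A minimal Go sketch of that assertion follows; it is not the real test code, and only the context name and expected image come from the log above.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// addonUsesImage describes the metrics-server Deployment in kube-system and
	// reports whether the output references the expected (overridden) image.
	func addonUsesImage(context, expected string) (bool, error) {
		out, err := exec.Command("kubectl", "--context", context,
			"describe", "deploy/metrics-server", "-n", "kube-system").CombinedOutput()
		if err != nil {
			// e.g. NotFound, as in this run, when the addon never deployed
			return false, fmt.Errorf("%v: %s", err, out)
		}
		return strings.Contains(string(out), expected), nil
	}

	func main() {
		ok, err := addonUsesImage("old-k8s-version-061725", "fake.domain/registry.k8s.io/echoserver:1.4")
		fmt.Println(ok, err)
	}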
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-061725
helpers_test.go:243: (dbg) docker inspect old-k8s-version-061725:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9b67329f891f8d4a601d1da44c24f1f70a816b6b9dd1271a7207bc7ac21cc041",
	        "Created": "2025-10-13T22:04:24.643297678Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 179203,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-13T22:04:24.722243952Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/9b67329f891f8d4a601d1da44c24f1f70a816b6b9dd1271a7207bc7ac21cc041/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9b67329f891f8d4a601d1da44c24f1f70a816b6b9dd1271a7207bc7ac21cc041/hostname",
	        "HostsPath": "/var/lib/docker/containers/9b67329f891f8d4a601d1da44c24f1f70a816b6b9dd1271a7207bc7ac21cc041/hosts",
	        "LogPath": "/var/lib/docker/containers/9b67329f891f8d4a601d1da44c24f1f70a816b6b9dd1271a7207bc7ac21cc041/9b67329f891f8d4a601d1da44c24f1f70a816b6b9dd1271a7207bc7ac21cc041-json.log",
	        "Name": "/old-k8s-version-061725",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-061725:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-061725",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9b67329f891f8d4a601d1da44c24f1f70a816b6b9dd1271a7207bc7ac21cc041",
	                "LowerDir": "/var/lib/docker/overlay2/d87821f609fa965d573bd1d67dbfade9ad46250a90bd0a64282669cf2490b2b8-init/diff:/var/lib/docker/overlay2/160e637be34680c755ffc6109c686a7fe5c4dfcba06c9274b3806724dc064518/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d87821f609fa965d573bd1d67dbfade9ad46250a90bd0a64282669cf2490b2b8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d87821f609fa965d573bd1d67dbfade9ad46250a90bd0a64282669cf2490b2b8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d87821f609fa965d573bd1d67dbfade9ad46250a90bd0a64282669cf2490b2b8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-061725",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-061725/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-061725",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-061725",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-061725",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "673bd0efbb54c8a15094a3043afbc01cbb8d04f7a54b4c7c5262f37d3e98dcf5",
	            "SandboxKey": "/var/run/docker/netns/673bd0efbb54",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33051"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33052"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33055"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33053"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33054"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-061725": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2e:36:87:1e:22:9e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "342c36a433557ef5e18c5eb6a5e2eade730d4334bff3f113c0f457eda67e9161",
	                    "EndpointID": "e1abc671118bb741016c86a6472b035132a87d823569ddf4ff5024e74d858156",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-061725",
	                        "9b67329f891f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
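
The inspect dump above is captured whole for the post-mortem, but the fields the tests actually act on are narrow: the container IP on the old-k8s-version-061725 network (192.168.85.2) and the host port mapped to 8443 (33054 here). Below is a minimal Go sketch showing how those two values could be pulled with docker's format templates instead of parsing the full JSON; it is illustrative only, with the container name taken from the log.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// inspectField asks docker to render a single value from the inspect data
	// using a format template, instead of dumping and parsing the full JSON.
	func inspectField(container, format string) (string, error) {
		out, err := exec.Command("docker", "inspect", "-f", format, container).Output()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		name := "old-k8s-version-061725"
		ip, _ := inspectField(name, "{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}")
		port, _ := inspectField(name, `{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`)
		fmt.Println(ip, port) // from the dump above: 192.168.85.2 33054
	}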
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-061725 -n old-k8s-version-061725
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-061725 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-061725 logs -n 25: (1.15321028s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-122822 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ cilium-122822             │ jenkins │ v1.37.0 │ 13 Oct 25 21:55 UTC │                     │
	│ ssh     │ -p cilium-122822 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ cilium-122822             │ jenkins │ v1.37.0 │ 13 Oct 25 21:55 UTC │                     │
	│ ssh     │ -p cilium-122822 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ cilium-122822             │ jenkins │ v1.37.0 │ 13 Oct 25 21:55 UTC │                     │
	│ ssh     │ -p cilium-122822 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-122822             │ jenkins │ v1.37.0 │ 13 Oct 25 21:55 UTC │                     │
	│ ssh     │ -p cilium-122822 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-122822             │ jenkins │ v1.37.0 │ 13 Oct 25 21:55 UTC │                     │
	│ ssh     │ -p cilium-122822 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-122822             │ jenkins │ v1.37.0 │ 13 Oct 25 21:55 UTC │                     │
	│ ssh     │ -p cilium-122822 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-122822             │ jenkins │ v1.37.0 │ 13 Oct 25 21:55 UTC │                     │
	│ ssh     │ -p cilium-122822 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-122822             │ jenkins │ v1.37.0 │ 13 Oct 25 21:55 UTC │                     │
	│ ssh     │ -p cilium-122822 sudo containerd config dump                                                                                                                                                                                                  │ cilium-122822             │ jenkins │ v1.37.0 │ 13 Oct 25 21:55 UTC │                     │
	│ ssh     │ -p cilium-122822 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-122822             │ jenkins │ v1.37.0 │ 13 Oct 25 21:55 UTC │                     │
	│ ssh     │ -p cilium-122822 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-122822             │ jenkins │ v1.37.0 │ 13 Oct 25 21:55 UTC │                     │
	│ ssh     │ -p cilium-122822 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-122822             │ jenkins │ v1.37.0 │ 13 Oct 25 21:55 UTC │                     │
	│ ssh     │ -p cilium-122822 sudo crio config                                                                                                                                                                                                             │ cilium-122822             │ jenkins │ v1.37.0 │ 13 Oct 25 21:55 UTC │                     │
	│ delete  │ -p cilium-122822                                                                                                                                                                                                                              │ cilium-122822             │ jenkins │ v1.37.0 │ 13 Oct 25 21:55 UTC │ 13 Oct 25 21:55 UTC │
	│ start   │ -p force-systemd-env-312094 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-312094  │ jenkins │ v1.37.0 │ 13 Oct 25 21:55 UTC │                     │
	│ ssh     │ force-systemd-flag-257205 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-257205 │ jenkins │ v1.37.0 │ 13 Oct 25 22:02 UTC │ 13 Oct 25 22:02 UTC │
	│ delete  │ -p force-systemd-flag-257205                                                                                                                                                                                                                  │ force-systemd-flag-257205 │ jenkins │ v1.37.0 │ 13 Oct 25 22:02 UTC │ 13 Oct 25 22:02 UTC │
	│ start   │ -p cert-expiration-546667 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-546667    │ jenkins │ v1.37.0 │ 13 Oct 25 22:02 UTC │ 13 Oct 25 22:02 UTC │
	│ delete  │ -p force-systemd-env-312094                                                                                                                                                                                                                   │ force-systemd-env-312094  │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │ 13 Oct 25 22:03 UTC │
	│ start   │ -p cert-options-194931 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-194931       │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │ 13 Oct 25 22:04 UTC │
	│ ssh     │ cert-options-194931 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-194931       │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │ 13 Oct 25 22:04 UTC │
	│ ssh     │ -p cert-options-194931 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-194931       │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │ 13 Oct 25 22:04 UTC │
	│ delete  │ -p cert-options-194931                                                                                                                                                                                                                        │ cert-options-194931       │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │ 13 Oct 25 22:04 UTC │
	│ start   │ -p old-k8s-version-061725 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-061725    │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │ 13 Oct 25 22:05 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-061725 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-061725    │ jenkins │ v1.37.0 │ 13 Oct 25 22:05 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 22:04:17
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 22:04:17.836327  178815 out.go:360] Setting OutFile to fd 1 ...
	I1013 22:04:17.836465  178815 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:04:17.836476  178815 out.go:374] Setting ErrFile to fd 2...
	I1013 22:04:17.836480  178815 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:04:17.836738  178815 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-2495/.minikube/bin
	I1013 22:04:17.837266  178815 out.go:368] Setting JSON to false
	I1013 22:04:17.838152  178815 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":6392,"bootTime":1760386666,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1013 22:04:17.838278  178815 start.go:141] virtualization:  
	I1013 22:04:17.844220  178815 out.go:179] * [old-k8s-version-061725] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1013 22:04:17.847688  178815 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 22:04:17.847752  178815 notify.go:220] Checking for updates...
	I1013 22:04:17.854221  178815 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 22:04:17.857594  178815 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-2495/kubeconfig
	I1013 22:04:17.860734  178815 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-2495/.minikube
	I1013 22:04:17.863907  178815 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1013 22:04:17.867010  178815 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 22:04:17.870573  178815 config.go:182] Loaded profile config "cert-expiration-546667": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:04:17.870687  178815 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 22:04:17.897001  178815 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1013 22:04:17.897201  178815 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 22:04:17.972076  178815 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-13 22:04:17.962134834 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 22:04:17.972185  178815 docker.go:318] overlay module found
	I1013 22:04:17.977363  178815 out.go:179] * Using the docker driver based on user configuration
	I1013 22:04:17.980265  178815 start.go:305] selected driver: docker
	I1013 22:04:17.980305  178815 start.go:925] validating driver "docker" against <nil>
	I1013 22:04:17.980320  178815 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 22:04:17.981152  178815 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 22:04:18.058577  178815 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-13 22:04:18.041415969 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 22:04:18.058736  178815 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1013 22:04:18.058975  178815 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 22:04:18.062073  178815 out.go:179] * Using Docker driver with root privileges
	I1013 22:04:18.065033  178815 cni.go:84] Creating CNI manager for ""
	I1013 22:04:18.065137  178815 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 22:04:18.065156  178815 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1013 22:04:18.065259  178815 start.go:349] cluster config:
	{Name:old-k8s-version-061725 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-061725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SS
HAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:04:18.070257  178815 out.go:179] * Starting "old-k8s-version-061725" primary control-plane node in "old-k8s-version-061725" cluster
	I1013 22:04:18.073221  178815 cache.go:123] Beginning downloading kic base image for docker with crio
	I1013 22:04:18.076291  178815 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1013 22:04:18.079224  178815 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1013 22:04:18.079262  178815 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1013 22:04:18.079311  178815 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-2495/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1013 22:04:18.079353  178815 cache.go:58] Caching tarball of preloaded images
	I1013 22:04:18.079485  178815 preload.go:233] Found /home/jenkins/minikube-integration/21724-2495/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1013 22:04:18.079499  178815 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1013 22:04:18.079818  178815 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/old-k8s-version-061725/config.json ...
	I1013 22:04:18.079860  178815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/old-k8s-version-061725/config.json: {Name:mk5231faf2eab840533760ff9d1cfc5c4bae9972 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:04:18.099862  178815 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1013 22:04:18.099888  178815 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1013 22:04:18.099901  178815 cache.go:232] Successfully downloaded all kic artifacts
	I1013 22:04:18.099924  178815 start.go:360] acquireMachinesLock for old-k8s-version-061725: {Name:mk85f4e63a49e9b332b4abe1ac67e5d46243b584 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 22:04:18.100044  178815 start.go:364] duration metric: took 97.179µs to acquireMachinesLock for "old-k8s-version-061725"
	I1013 22:04:18.100076  178815 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-061725 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-061725 Namespace:default APIServerHAVIP:
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQ
emuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 22:04:18.100154  178815 start.go:125] createHost starting for "" (driver="docker")
	I1013 22:04:18.105509  178815 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1013 22:04:18.105776  178815 start.go:159] libmachine.API.Create for "old-k8s-version-061725" (driver="docker")
	I1013 22:04:18.105833  178815 client.go:168] LocalClient.Create starting
	I1013 22:04:18.105907  178815 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem
	I1013 22:04:18.105950  178815 main.go:141] libmachine: Decoding PEM data...
	I1013 22:04:18.105979  178815 main.go:141] libmachine: Parsing certificate...
	I1013 22:04:18.106045  178815 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem
	I1013 22:04:18.106070  178815 main.go:141] libmachine: Decoding PEM data...
	I1013 22:04:18.106088  178815 main.go:141] libmachine: Parsing certificate...
	I1013 22:04:18.106467  178815 cli_runner.go:164] Run: docker network inspect old-k8s-version-061725 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1013 22:04:18.124449  178815 cli_runner.go:211] docker network inspect old-k8s-version-061725 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1013 22:04:18.124533  178815 network_create.go:284] running [docker network inspect old-k8s-version-061725] to gather additional debugging logs...
	I1013 22:04:18.124558  178815 cli_runner.go:164] Run: docker network inspect old-k8s-version-061725
	W1013 22:04:18.142222  178815 cli_runner.go:211] docker network inspect old-k8s-version-061725 returned with exit code 1
	I1013 22:04:18.142256  178815 network_create.go:287] error running [docker network inspect old-k8s-version-061725]: docker network inspect old-k8s-version-061725: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-061725 not found
	I1013 22:04:18.142271  178815 network_create.go:289] output of [docker network inspect old-k8s-version-061725]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-061725 not found
	
	** /stderr **
	I1013 22:04:18.142385  178815 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 22:04:18.159515  178815 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-95647f6063f5 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:2e:3d:b3:ce:26:60} reservation:<nil>}
	I1013 22:04:18.159956  178815 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-524c3512c6b6 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:06:88:a1:02:e0:8e} reservation:<nil>}
	I1013 22:04:18.160327  178815 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-2d17b8b5c002 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:aa:ca:29:7e:1f:a0} reservation:<nil>}
	I1013 22:04:18.160584  178815 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-24584988cd46 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:76:b6:dc:d1:17:75} reservation:<nil>}
	I1013 22:04:18.160975  178815 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a03800}
	I1013 22:04:18.160996  178815 network_create.go:124] attempt to create docker network old-k8s-version-061725 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1013 22:04:18.161059  178815 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-061725 old-k8s-version-061725
	I1013 22:04:18.226585  178815 network_create.go:108] docker network old-k8s-version-061725 192.168.85.0/24 created
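
For reference, the bridge network created above can be checked by hand; a minimal sketch, assuming the docker CLI on the host (name, subnet and gateway taken from the log lines above):

	docker network inspect old-k8s-version-061725 \
	  -f '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'
	# expected: 192.168.85.0/24 192.168.85.1
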
	I1013 22:04:18.226621  178815 kic.go:121] calculated static IP "192.168.85.2" for the "old-k8s-version-061725" container
	I1013 22:04:18.226729  178815 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1013 22:04:18.243861  178815 cli_runner.go:164] Run: docker volume create old-k8s-version-061725 --label name.minikube.sigs.k8s.io=old-k8s-version-061725 --label created_by.minikube.sigs.k8s.io=true
	I1013 22:04:18.263754  178815 oci.go:103] Successfully created a docker volume old-k8s-version-061725
	I1013 22:04:18.263871  178815 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-061725-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-061725 --entrypoint /usr/bin/test -v old-k8s-version-061725:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1013 22:04:18.783629  178815 oci.go:107] Successfully prepared a docker volume old-k8s-version-061725
	I1013 22:04:18.783681  178815 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1013 22:04:18.783700  178815 kic.go:194] Starting extracting preloaded images to volume ...
	I1013 22:04:18.783858  178815 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-2495/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-061725:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1013 22:04:24.565925  178815 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-2495/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-061725:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (5.782022541s)
	I1013 22:04:24.565953  178815 kic.go:203] duration metric: took 5.782250408s to extract preloaded images to volume ...
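
The extraction above is just tar decompressing the lz4 preload into the named volume; an equivalent sketch run by hand, assuming the lz4 and tar tools are installed on the host (the target directory below is a placeholder):

	lz4 -dc /home/jenkins/minikube-integration/21724-2495/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 \
	  | sudo tar -x -C /path/to/extractDir   # /path/to/extractDir is a placeholder
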
	W1013 22:04:24.566095  178815 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1013 22:04:24.566214  178815 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1013 22:04:24.628116  178815 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-061725 --name old-k8s-version-061725 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-061725 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-061725 --network old-k8s-version-061725 --ip 192.168.85.2 --volume old-k8s-version-061725:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1013 22:04:24.959176  178815 cli_runner.go:164] Run: docker container inspect old-k8s-version-061725 --format={{.State.Running}}
	I1013 22:04:24.983383  178815 cli_runner.go:164] Run: docker container inspect old-k8s-version-061725 --format={{.State.Status}}
	I1013 22:04:25.006204  178815 cli_runner.go:164] Run: docker exec old-k8s-version-061725 stat /var/lib/dpkg/alternatives/iptables
	I1013 22:04:25.070616  178815 oci.go:144] the created container "old-k8s-version-061725" has a running status.
	I1013 22:04:25.070641  178815 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21724-2495/.minikube/machines/old-k8s-version-061725/id_rsa...
	I1013 22:04:25.647407  178815 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21724-2495/.minikube/machines/old-k8s-version-061725/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1013 22:04:25.666091  178815 cli_runner.go:164] Run: docker container inspect old-k8s-version-061725 --format={{.State.Status}}
	I1013 22:04:25.683241  178815 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1013 22:04:25.683265  178815 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-061725 chown docker:docker /home/docker/.ssh/authorized_keys]
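
With the key generated above installed as authorized_keys inside the container, the node is reachable over the host port docker published for 22/tcp; a minimal sketch by hand, using the key path from this log and the docker user (the mapped port, 33051 in this run, also appears further down):

	PORT=$(docker port old-k8s-version-061725 22/tcp | head -n1 | cut -d: -f2)   # 33051 in this run
	ssh -i /home/jenkins/minikube-integration/21724-2495/.minikube/machines/old-k8s-version-061725/id_rsa \
	  -p "$PORT" docker@127.0.0.1 hostname
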
	I1013 22:04:25.721054  178815 cli_runner.go:164] Run: docker container inspect old-k8s-version-061725 --format={{.State.Status}}
	I1013 22:04:25.739279  178815 machine.go:93] provisionDockerMachine start ...
	I1013 22:04:25.739379  178815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-061725
	I1013 22:04:25.755972  178815 main.go:141] libmachine: Using SSH client type: native
	I1013 22:04:25.756317  178815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33051 <nil> <nil>}
	I1013 22:04:25.756332  178815 main.go:141] libmachine: About to run SSH command:
	hostname
	I1013 22:04:25.756898  178815 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52196->127.0.0.1:33051: read: connection reset by peer
	I1013 22:04:28.899215  178815 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-061725
	
	I1013 22:04:28.899241  178815 ubuntu.go:182] provisioning hostname "old-k8s-version-061725"
	I1013 22:04:28.899310  178815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-061725
	I1013 22:04:28.916296  178815 main.go:141] libmachine: Using SSH client type: native
	I1013 22:04:28.916600  178815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33051 <nil> <nil>}
	I1013 22:04:28.916618  178815 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-061725 && echo "old-k8s-version-061725" | sudo tee /etc/hostname
	I1013 22:04:29.079579  178815 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-061725
	
	I1013 22:04:29.079654  178815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-061725
	I1013 22:04:29.097870  178815 main.go:141] libmachine: Using SSH client type: native
	I1013 22:04:29.098195  178815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33051 <nil> <nil>}
	I1013 22:04:29.098217  178815 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-061725' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-061725/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-061725' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 22:04:29.243959  178815 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1013 22:04:29.243999  178815 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-2495/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-2495/.minikube}
	I1013 22:04:29.244020  178815 ubuntu.go:190] setting up certificates
	I1013 22:04:29.244031  178815 provision.go:84] configureAuth start
	I1013 22:04:29.244090  178815 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-061725
	I1013 22:04:29.261819  178815 provision.go:143] copyHostCerts
	I1013 22:04:29.261891  178815 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-2495/.minikube/cert.pem, removing ...
	I1013 22:04:29.261900  178815 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-2495/.minikube/cert.pem
	I1013 22:04:29.261979  178815 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-2495/.minikube/cert.pem (1123 bytes)
	I1013 22:04:29.262086  178815 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-2495/.minikube/key.pem, removing ...
	I1013 22:04:29.262100  178815 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-2495/.minikube/key.pem
	I1013 22:04:29.262131  178815 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-2495/.minikube/key.pem (1675 bytes)
	I1013 22:04:29.262196  178815 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-2495/.minikube/ca.pem, removing ...
	I1013 22:04:29.262207  178815 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-2495/.minikube/ca.pem
	I1013 22:04:29.262232  178815 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-2495/.minikube/ca.pem (1082 bytes)
	I1013 22:04:29.262319  178815 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-2495/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-061725 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-061725]
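
minikube generates that server certificate in Go; the same shape (signed by the local CA, with exactly the SANs listed above) can be sketched with openssl, using placeholder file names:

	openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr \
	  -subj "/O=jenkins.old-k8s-version-061725"
	printf 'subjectAltName=DNS:localhost,DNS:minikube,DNS:old-k8s-version-061725,IP:127.0.0.1,IP:192.168.85.2\n' > san.cnf
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -extfile san.cnf -days 365 -out server.pem
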
	I1013 22:04:29.496424  178815 provision.go:177] copyRemoteCerts
	I1013 22:04:29.496494  178815 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 22:04:29.496535  178815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-061725
	I1013 22:04:29.514796  178815 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33051 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/old-k8s-version-061725/id_rsa Username:docker}
	I1013 22:04:29.619748  178815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1013 22:04:29.637834  178815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1013 22:04:29.655357  178815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1013 22:04:29.673194  178815 provision.go:87] duration metric: took 429.125636ms to configureAuth
	I1013 22:04:29.673218  178815 ubuntu.go:206] setting minikube options for container-runtime
	I1013 22:04:29.673426  178815 config.go:182] Loaded profile config "old-k8s-version-061725": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1013 22:04:29.673543  178815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-061725
	I1013 22:04:29.689896  178815 main.go:141] libmachine: Using SSH client type: native
	I1013 22:04:29.690190  178815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33051 <nil> <nil>}
	I1013 22:04:29.690206  178815 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1013 22:04:29.953963  178815 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1013 22:04:29.954047  178815 machine.go:96] duration metric: took 4.214747496s to provisionDockerMachine
	I1013 22:04:29.954071  178815 client.go:171] duration metric: took 11.848226197s to LocalClient.Create
	I1013 22:04:29.954117  178815 start.go:167] duration metric: took 11.848342486s to libmachine.API.Create "old-k8s-version-061725"
	I1013 22:04:29.954142  178815 start.go:293] postStartSetup for "old-k8s-version-061725" (driver="docker")
	I1013 22:04:29.954170  178815 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 22:04:29.954256  178815 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 22:04:29.954328  178815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-061725
	I1013 22:04:29.972432  178815 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33051 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/old-k8s-version-061725/id_rsa Username:docker}
	I1013 22:04:30.089114  178815 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 22:04:30.092946  178815 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1013 22:04:30.092973  178815 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1013 22:04:30.092985  178815 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-2495/.minikube/addons for local assets ...
	I1013 22:04:30.093050  178815 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-2495/.minikube/files for local assets ...
	I1013 22:04:30.093139  178815 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem -> 42992.pem in /etc/ssl/certs
	I1013 22:04:30.093266  178815 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1013 22:04:30.102038  178815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem --> /etc/ssl/certs/42992.pem (1708 bytes)
	I1013 22:04:30.122555  178815 start.go:296] duration metric: took 168.384629ms for postStartSetup
	I1013 22:04:30.122968  178815 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-061725
	I1013 22:04:30.142500  178815 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/old-k8s-version-061725/config.json ...
	I1013 22:04:30.142868  178815 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 22:04:30.142921  178815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-061725
	I1013 22:04:30.159609  178815 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33051 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/old-k8s-version-061725/id_rsa Username:docker}
	I1013 22:04:30.261774  178815 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1013 22:04:30.266891  178815 start.go:128] duration metric: took 12.166720869s to createHost
	I1013 22:04:30.266913  178815 start.go:83] releasing machines lock for "old-k8s-version-061725", held for 12.166854823s
	I1013 22:04:30.266992  178815 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-061725
	I1013 22:04:30.283435  178815 ssh_runner.go:195] Run: cat /version.json
	I1013 22:04:30.283495  178815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-061725
	I1013 22:04:30.283443  178815 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 22:04:30.283632  178815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-061725
	I1013 22:04:30.303961  178815 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33051 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/old-k8s-version-061725/id_rsa Username:docker}
	I1013 22:04:30.309498  178815 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33051 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/old-k8s-version-061725/id_rsa Username:docker}
	I1013 22:04:30.500013  178815 ssh_runner.go:195] Run: systemctl --version
	I1013 22:04:30.506489  178815 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1013 22:04:30.543097  178815 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 22:04:30.548538  178815 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 22:04:30.548612  178815 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 22:04:30.578074  178815 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
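
The find/mv above renames any bridge or podman CNI configs so the runtime ignores them; a quick check of the result (nothing minikube-specific, run inside the node):

	ls -1 /etc/cni/net.d/ | grep mk_disabled   # the configs listed in the "disabled [...]" line now carry a .mk_disabled suffix
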
	I1013 22:04:30.578096  178815 start.go:495] detecting cgroup driver to use...
	I1013 22:04:30.578129  178815 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1013 22:04:30.578181  178815 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1013 22:04:30.596057  178815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1013 22:04:30.609437  178815 docker.go:218] disabling cri-docker service (if available) ...
	I1013 22:04:30.609554  178815 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 22:04:30.627707  178815 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 22:04:30.647030  178815 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 22:04:30.759440  178815 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 22:04:30.888362  178815 docker.go:234] disabling docker service ...
	I1013 22:04:30.888441  178815 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 22:04:30.910689  178815 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 22:04:30.924100  178815 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 22:04:31.065049  178815 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 22:04:31.182806  178815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1013 22:04:31.196686  178815 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 22:04:31.210775  178815 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1013 22:04:31.210847  178815 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:04:31.220416  178815 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1013 22:04:31.220493  178815 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:04:31.230993  178815 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:04:31.240426  178815 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:04:31.249725  178815 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 22:04:31.257835  178815 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:04:31.267378  178815 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:04:31.281388  178815 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:04:31.296615  178815 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 22:04:31.304648  178815 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
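
Taken together, the sed edits above set the pause image, cgroup manager, conmon cgroup and unprivileged-port sysctl in the cri-o drop-in; a quick way to confirm before the restart below (a sketch):

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# expected, given the edits above:
	#   pause_image = "registry.k8s.io/pause:3.9"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",
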
	I1013 22:04:31.312728  178815 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:04:31.419575  178815 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1013 22:04:31.547674  178815 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1013 22:04:31.547746  178815 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1013 22:04:31.551665  178815 start.go:563] Will wait 60s for crictl version
	I1013 22:04:31.551821  178815 ssh_runner.go:195] Run: which crictl
	I1013 22:04:31.555700  178815 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1013 22:04:31.581150  178815 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1013 22:04:31.581306  178815 ssh_runner.go:195] Run: crio --version
	I1013 22:04:31.615677  178815 ssh_runner.go:195] Run: crio --version
	I1013 22:04:31.648252  178815 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1013 22:04:31.651131  178815 cli_runner.go:164] Run: docker network inspect old-k8s-version-061725 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 22:04:31.666757  178815 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1013 22:04:31.670797  178815 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 22:04:31.680860  178815 kubeadm.go:883] updating cluster {Name:old-k8s-version-061725 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-061725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 22:04:31.680986  178815 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1013 22:04:31.681046  178815 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 22:04:31.716348  178815 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 22:04:31.716372  178815 crio.go:433] Images already preloaded, skipping extraction
	I1013 22:04:31.716428  178815 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 22:04:31.743033  178815 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 22:04:31.743056  178815 cache_images.go:85] Images are preloaded, skipping loading
	I1013 22:04:31.743064  178815 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1013 22:04:31.743179  178815 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-061725 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-061725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1013 22:04:31.743276  178815 ssh_runner.go:195] Run: crio config
	I1013 22:04:31.826438  178815 cni.go:84] Creating CNI manager for ""
	I1013 22:04:31.826515  178815 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 22:04:31.826556  178815 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1013 22:04:31.826601  178815 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-061725 NodeName:old-k8s-version-061725 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 22:04:31.826760  178815 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-061725"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1013 22:04:31.826853  178815 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1013 22:04:31.834806  178815 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 22:04:31.834930  178815 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 22:04:31.843394  178815 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1013 22:04:31.856739  178815 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 22:04:31.870016  178815 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I1013 22:04:31.883253  178815 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1013 22:04:31.887084  178815 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 22:04:31.897193  178815 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:04:32.023475  178815 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 22:04:32.041252  178815 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/old-k8s-version-061725 for IP: 192.168.85.2
	I1013 22:04:32.041326  178815 certs.go:195] generating shared ca certs ...
	I1013 22:04:32.041356  178815 certs.go:227] acquiring lock for ca certs: {Name:mk2386d3847709c1fe7ff4ab092e2e3fd8551167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:04:32.041519  178815 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-2495/.minikube/ca.key
	I1013 22:04:32.041596  178815 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.key
	I1013 22:04:32.041631  178815 certs.go:257] generating profile certs ...
	I1013 22:04:32.041707  178815 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/old-k8s-version-061725/client.key
	I1013 22:04:32.041747  178815 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/old-k8s-version-061725/client.crt with IP's: []
	I1013 22:04:32.476631  178815 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/old-k8s-version-061725/client.crt ...
	I1013 22:04:32.476664  178815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/old-k8s-version-061725/client.crt: {Name:mkfcf42147fd3171e549388632436057f35f80c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:04:32.476898  178815 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/old-k8s-version-061725/client.key ...
	I1013 22:04:32.476916  178815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/old-k8s-version-061725/client.key: {Name:mk97d94e25bb2862acf386ea05ed89a34e1abb32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:04:32.477012  178815 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/old-k8s-version-061725/apiserver.key.6e782913
	I1013 22:04:32.477034  178815 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/old-k8s-version-061725/apiserver.crt.6e782913 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1013 22:04:33.024621  178815 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/old-k8s-version-061725/apiserver.crt.6e782913 ...
	I1013 22:04:33.024653  178815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/old-k8s-version-061725/apiserver.crt.6e782913: {Name:mk961627ef6e04a316450399a50bbef66c71f0b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:04:33.024870  178815 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/old-k8s-version-061725/apiserver.key.6e782913 ...
	I1013 22:04:33.024886  178815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/old-k8s-version-061725/apiserver.key.6e782913: {Name:mk85c3e0f5dbcfa022cfbf8cad03b0aa8b11260e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:04:33.024986  178815 certs.go:382] copying /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/old-k8s-version-061725/apiserver.crt.6e782913 -> /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/old-k8s-version-061725/apiserver.crt
	I1013 22:04:33.025079  178815 certs.go:386] copying /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/old-k8s-version-061725/apiserver.key.6e782913 -> /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/old-k8s-version-061725/apiserver.key
	I1013 22:04:33.025151  178815 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/old-k8s-version-061725/proxy-client.key
	I1013 22:04:33.025173  178815 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/old-k8s-version-061725/proxy-client.crt with IP's: []
	I1013 22:04:33.670685  178815 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/old-k8s-version-061725/proxy-client.crt ...
	I1013 22:04:33.670725  178815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/old-k8s-version-061725/proxy-client.crt: {Name:mk6e0b958f1ca81f0039867e519fbbb0d713632b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:04:33.670918  178815 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/old-k8s-version-061725/proxy-client.key ...
	I1013 22:04:33.670933  178815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/old-k8s-version-061725/proxy-client.key: {Name:mk59b8e231f57b4603c6d9d950fddcfa88453db6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:04:33.671128  178815 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/4299.pem (1338 bytes)
	W1013 22:04:33.671176  178815 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-2495/.minikube/certs/4299_empty.pem, impossibly tiny 0 bytes
	I1013 22:04:33.671191  178815 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca-key.pem (1679 bytes)
	I1013 22:04:33.671216  178815 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem (1082 bytes)
	I1013 22:04:33.671245  178815 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem (1123 bytes)
	I1013 22:04:33.671271  178815 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/key.pem (1675 bytes)
	I1013 22:04:33.671317  178815 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem (1708 bytes)
	I1013 22:04:33.671980  178815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 22:04:33.692886  178815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1013 22:04:33.712320  178815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 22:04:33.732481  178815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1013 22:04:33.753406  178815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/old-k8s-version-061725/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1013 22:04:33.773707  178815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/old-k8s-version-061725/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1013 22:04:33.792275  178815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/old-k8s-version-061725/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 22:04:33.810270  178815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/old-k8s-version-061725/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1013 22:04:33.827666  178815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/certs/4299.pem --> /usr/share/ca-certificates/4299.pem (1338 bytes)
	I1013 22:04:33.846204  178815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem --> /usr/share/ca-certificates/42992.pem (1708 bytes)
	I1013 22:04:33.863430  178815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 22:04:33.880736  178815 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 22:04:33.893857  178815 ssh_runner.go:195] Run: openssl version
	I1013 22:04:33.899876  178815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/42992.pem && ln -fs /usr/share/ca-certificates/42992.pem /etc/ssl/certs/42992.pem"
	I1013 22:04:33.907725  178815 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42992.pem
	I1013 22:04:33.911103  178815 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 13 21:06 /usr/share/ca-certificates/42992.pem
	I1013 22:04:33.911204  178815 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42992.pem
	I1013 22:04:33.953325  178815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/42992.pem /etc/ssl/certs/3ec20f2e.0"
	I1013 22:04:33.961543  178815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 22:04:33.971457  178815 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:04:33.976345  178815 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 20:59 /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:04:33.976432  178815 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:04:34.017335  178815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1013 22:04:34.026587  178815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4299.pem && ln -fs /usr/share/ca-certificates/4299.pem /etc/ssl/certs/4299.pem"
	I1013 22:04:34.035664  178815 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4299.pem
	I1013 22:04:34.039521  178815 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 13 21:06 /usr/share/ca-certificates/4299.pem
	I1013 22:04:34.039585  178815 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4299.pem
	I1013 22:04:34.081852  178815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4299.pem /etc/ssl/certs/51391683.0"
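
The openssl/ln pairs above follow the standard OpenSSL hashed-directory convention: each CA file is linked as <subject-hash>.0 under /etc/ssl/certs. A minimal sketch of the same step for the minikubeCA file (the hash value matches the link name in the log):

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 in this run
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
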
	I1013 22:04:34.090707  178815 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 22:04:34.094763  178815 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1013 22:04:34.094822  178815 kubeadm.go:400] StartCluster: {Name:old-k8s-version-061725 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-061725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:04:34.094941  178815 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 22:04:34.095024  178815 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 22:04:34.124143  178815 cri.go:89] found id: ""
	I1013 22:04:34.124259  178815 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1013 22:04:34.132606  178815 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1013 22:04:34.141143  178815 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1013 22:04:34.141220  178815 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1013 22:04:34.150035  178815 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1013 22:04:34.150064  178815 kubeadm.go:157] found existing configuration files:
	
	I1013 22:04:34.150128  178815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1013 22:04:34.158530  178815 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1013 22:04:34.158630  178815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1013 22:04:34.166357  178815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1013 22:04:34.174629  178815 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1013 22:04:34.174692  178815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1013 22:04:34.182737  178815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1013 22:04:34.190843  178815 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1013 22:04:34.190913  178815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1013 22:04:34.198708  178815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1013 22:04:34.206482  178815 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1013 22:04:34.206579  178815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1013 22:04:34.214156  178815 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1013 22:04:34.260559  178815 kubeadm.go:318] [init] Using Kubernetes version: v1.28.0
	I1013 22:04:34.260639  178815 kubeadm.go:318] [preflight] Running pre-flight checks
	I1013 22:04:34.301861  178815 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1013 22:04:34.301937  178815 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1013 22:04:34.301980  178815 kubeadm.go:318] OS: Linux
	I1013 22:04:34.302049  178815 kubeadm.go:318] CGROUPS_CPU: enabled
	I1013 22:04:34.302103  178815 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1013 22:04:34.302157  178815 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1013 22:04:34.302212  178815 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1013 22:04:34.302266  178815 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1013 22:04:34.302320  178815 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1013 22:04:34.302370  178815 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1013 22:04:34.302424  178815 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1013 22:04:34.302476  178815 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1013 22:04:34.383033  178815 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1013 22:04:34.383152  178815 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1013 22:04:34.383257  178815 kubeadm.go:318] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1013 22:04:34.551499  178815 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1013 22:04:34.554901  178815 out.go:252]   - Generating certificates and keys ...
	I1013 22:04:34.554994  178815 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1013 22:04:34.555067  178815 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1013 22:04:34.919197  178815 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1013 22:04:35.280329  178815 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1013 22:04:35.496106  178815 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1013 22:04:36.402573  178815 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1013 22:04:37.439164  178815 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1013 22:04:37.439722  178815 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-061725] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1013 22:04:38.341142  178815 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1013 22:04:38.341297  178815 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-061725] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1013 22:04:39.393434  178815 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1013 22:04:39.628183  178815 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1013 22:04:40.014731  178815 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1013 22:04:40.015088  178815 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1013 22:04:40.327667  178815 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1013 22:04:40.574039  178815 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1013 22:04:40.796078  178815 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1013 22:04:41.357444  178815 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1013 22:04:41.358301  178815 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1013 22:04:41.360877  178815 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1013 22:04:41.364468  178815 out.go:252]   - Booting up control plane ...
	I1013 22:04:41.364589  178815 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1013 22:04:41.364691  178815 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1013 22:04:41.365351  178815 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1013 22:04:41.383832  178815 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1013 22:04:41.384797  178815 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1013 22:04:41.385068  178815 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1013 22:04:41.524420  178815 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1013 22:04:48.527894  178815 kubeadm.go:318] [apiclient] All control plane components are healthy after 7.004810 seconds
	I1013 22:04:48.528043  178815 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1013 22:04:48.545425  178815 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1013 22:04:49.075642  178815 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1013 22:04:49.075887  178815 kubeadm.go:318] [mark-control-plane] Marking the node old-k8s-version-061725 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1013 22:04:49.588317  178815 kubeadm.go:318] [bootstrap-token] Using token: fa4uam.1i5rquycewbtt4v3
	I1013 22:04:49.591314  178815 out.go:252]   - Configuring RBAC rules ...
	I1013 22:04:49.591441  178815 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1013 22:04:49.601274  178815 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1013 22:04:49.609896  178815 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1013 22:04:49.616631  178815 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1013 22:04:49.620800  178815 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1013 22:04:49.624794  178815 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1013 22:04:49.639432  178815 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1013 22:04:49.935874  178815 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1013 22:04:50.044793  178815 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1013 22:04:50.046084  178815 kubeadm.go:318] 
	I1013 22:04:50.046162  178815 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1013 22:04:50.046168  178815 kubeadm.go:318] 
	I1013 22:04:50.046249  178815 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1013 22:04:50.046254  178815 kubeadm.go:318] 
	I1013 22:04:50.046281  178815 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1013 22:04:50.046342  178815 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1013 22:04:50.046395  178815 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1013 22:04:50.046399  178815 kubeadm.go:318] 
	I1013 22:04:50.046456  178815 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1013 22:04:50.046461  178815 kubeadm.go:318] 
	I1013 22:04:50.046511  178815 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1013 22:04:50.046516  178815 kubeadm.go:318] 
	I1013 22:04:50.046571  178815 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1013 22:04:50.046649  178815 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1013 22:04:50.046720  178815 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1013 22:04:50.046725  178815 kubeadm.go:318] 
	I1013 22:04:50.046826  178815 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1013 22:04:50.046907  178815 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1013 22:04:50.046913  178815 kubeadm.go:318] 
	I1013 22:04:50.047020  178815 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token fa4uam.1i5rquycewbtt4v3 \
	I1013 22:04:50.047130  178815 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:8246abc30e5ad4f73e0c5665e9f98b5f472397f3707b313971a0405cce9e4e60 \
	I1013 22:04:50.047151  178815 kubeadm.go:318] 	--control-plane 
	I1013 22:04:50.047156  178815 kubeadm.go:318] 
	I1013 22:04:50.047245  178815 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1013 22:04:50.047250  178815 kubeadm.go:318] 
	I1013 22:04:50.047335  178815 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token fa4uam.1i5rquycewbtt4v3 \
	I1013 22:04:50.047449  178815 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:8246abc30e5ad4f73e0c5665e9f98b5f472397f3707b313971a0405cce9e4e60 
	I1013 22:04:50.051707  178815 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1013 22:04:50.051976  178815 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1013 22:04:50.052038  178815 cni.go:84] Creating CNI manager for ""
	I1013 22:04:50.052060  178815 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 22:04:50.057047  178815 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1013 22:04:50.059934  178815 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1013 22:04:50.069648  178815 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1013 22:04:50.069671  178815 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1013 22:04:50.088362  178815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1013 22:04:51.157312  178815 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.06891501s)
	I1013 22:04:51.157356  178815 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1013 22:04:51.157499  178815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:04:51.157586  178815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-061725 minikube.k8s.io/updated_at=2025_10_13T22_04_51_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6d66ff63385795e7745a92b3d96cb54f5b977801 minikube.k8s.io/name=old-k8s-version-061725 minikube.k8s.io/primary=true
	I1013 22:04:51.359684  178815 ops.go:34] apiserver oom_adj: -16
	I1013 22:04:51.359828  178815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:04:51.860232  178815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:04:52.360570  178815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:04:52.860253  178815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:04:53.360797  178815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:04:53.860013  178815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:04:54.360854  178815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:04:54.860402  178815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:04:55.359912  178815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:04:55.860412  178815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:04:56.359913  178815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:04:56.859960  178815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:04:57.360448  178815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:04:57.860886  178815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:04:58.359865  178815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:04:58.860516  178815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:04:59.359859  178815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:04:59.859994  178815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:05:00.363088  178815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:05:00.860837  178815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:05:01.360482  178815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:05:01.860425  178815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:05:02.360730  178815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:05:02.860288  178815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:05:02.994147  178815 kubeadm.go:1113] duration metric: took 11.836698774s to wait for elevateKubeSystemPrivileges
	I1013 22:05:02.994185  178815 kubeadm.go:402] duration metric: took 28.899359304s to StartCluster
	I1013 22:05:02.994202  178815 settings.go:142] acquiring lock: {Name:mk4a4b065845724eb9b4bb1832a39a02e57dd066 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:05:02.994261  178815 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-2495/kubeconfig
	I1013 22:05:02.995280  178815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/kubeconfig: {Name:mke211bdcf461494c5205a242d94f4d44bbce10e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:05:02.995492  178815 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 22:05:02.995589  178815 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1013 22:05:02.995864  178815 config.go:182] Loaded profile config "old-k8s-version-061725": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1013 22:05:02.995902  178815 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1013 22:05:02.995970  178815 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-061725"
	I1013 22:05:02.995993  178815 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-061725"
	I1013 22:05:02.996017  178815 host.go:66] Checking if "old-k8s-version-061725" exists ...
	I1013 22:05:02.996553  178815 cli_runner.go:164] Run: docker container inspect old-k8s-version-061725 --format={{.State.Status}}
	I1013 22:05:02.996787  178815 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-061725"
	I1013 22:05:02.996808  178815 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-061725"
	I1013 22:05:02.997091  178815 cli_runner.go:164] Run: docker container inspect old-k8s-version-061725 --format={{.State.Status}}
	I1013 22:05:02.998737  178815 out.go:179] * Verifying Kubernetes components...
	I1013 22:05:03.001691  178815 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:05:03.041798  178815 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-061725"
	I1013 22:05:03.041843  178815 host.go:66] Checking if "old-k8s-version-061725" exists ...
	I1013 22:05:03.042251  178815 cli_runner.go:164] Run: docker container inspect old-k8s-version-061725 --format={{.State.Status}}
	I1013 22:05:03.049569  178815 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1013 22:05:03.052509  178815 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 22:05:03.052534  178815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1013 22:05:03.052606  178815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-061725
	I1013 22:05:03.067487  178815 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1013 22:05:03.067508  178815 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1013 22:05:03.067566  178815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-061725
	I1013 22:05:03.091021  178815 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33051 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/old-k8s-version-061725/id_rsa Username:docker}
	I1013 22:05:03.100430  178815 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33051 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/old-k8s-version-061725/id_rsa Username:docker}
	I1013 22:05:03.291333  178815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1013 22:05:03.350212  178815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 22:05:03.358712  178815 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 22:05:03.358880  178815 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1013 22:05:04.673988  178815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.323742656s)
	I1013 22:05:04.674080  178815 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.315185748s)
	I1013 22:05:04.674142  178815 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1013 22:05:04.674093  178815 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.315364008s)
	I1013 22:05:04.675887  178815 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-061725" to be "Ready" ...
	I1013 22:05:04.681485  178815 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1013 22:05:04.685052  178815 addons.go:514] duration metric: took 1.689144798s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1013 22:05:05.179694  178815 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-061725" context rescaled to 1 replicas
	W1013 22:05:06.679296  178815 node_ready.go:57] node "old-k8s-version-061725" has "Ready":"False" status (will retry)
	W1013 22:05:08.679422  178815 node_ready.go:57] node "old-k8s-version-061725" has "Ready":"False" status (will retry)
	W1013 22:05:10.679478  178815 node_ready.go:57] node "old-k8s-version-061725" has "Ready":"False" status (will retry)
	W1013 22:05:13.179023  178815 node_ready.go:57] node "old-k8s-version-061725" has "Ready":"False" status (will retry)
	W1013 22:05:15.179625  178815 node_ready.go:57] node "old-k8s-version-061725" has "Ready":"False" status (will retry)
	I1013 22:05:17.178983  178815 node_ready.go:49] node "old-k8s-version-061725" is "Ready"
	I1013 22:05:17.179013  178815 node_ready.go:38] duration metric: took 12.503089944s for node "old-k8s-version-061725" to be "Ready" ...
	I1013 22:05:17.179027  178815 api_server.go:52] waiting for apiserver process to appear ...
	I1013 22:05:17.179086  178815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 22:05:17.190539  178815 api_server.go:72] duration metric: took 14.195012912s to wait for apiserver process to appear ...
	I1013 22:05:17.190564  178815 api_server.go:88] waiting for apiserver healthz status ...
	I1013 22:05:17.190582  178815 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1013 22:05:17.199256  178815 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1013 22:05:17.200689  178815 api_server.go:141] control plane version: v1.28.0
	I1013 22:05:17.200715  178815 api_server.go:131] duration metric: took 10.144141ms to wait for apiserver health ...
	I1013 22:05:17.200724  178815 system_pods.go:43] waiting for kube-system pods to appear ...
	I1013 22:05:17.203969  178815 system_pods.go:59] 8 kube-system pods found
	I1013 22:05:17.203998  178815 system_pods.go:61] "coredns-5dd5756b68-6k2fk" [c1ae429b-61b6-4c93-8de5-ceef5fad5f55] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 22:05:17.204004  178815 system_pods.go:61] "etcd-old-k8s-version-061725" [6589077c-d559-4773-b435-26d82965b299] Running
	I1013 22:05:17.204010  178815 system_pods.go:61] "kindnet-8j8n7" [635ce300-372b-48da-b8ea-5fceaf8b6add] Running
	I1013 22:05:17.204014  178815 system_pods.go:61] "kube-apiserver-old-k8s-version-061725" [f9a32dcc-67c3-4c39-a136-4e5bf1e802e8] Running
	I1013 22:05:17.204019  178815 system_pods.go:61] "kube-controller-manager-old-k8s-version-061725" [58899ecf-40cb-46e4-a680-d40ffebea3f8] Running
	I1013 22:05:17.204023  178815 system_pods.go:61] "kube-proxy-kglxn" [046c9623-16c1-4968-a733-8f25a8601930] Running
	I1013 22:05:17.204027  178815 system_pods.go:61] "kube-scheduler-old-k8s-version-061725" [7320e402-46f1-42bf-a89b-3ff685c76155] Running
	I1013 22:05:17.204033  178815 system_pods.go:61] "storage-provisioner" [47d1825a-9ebe-4730-b56d-677a008d0099] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 22:05:17.204040  178815 system_pods.go:74] duration metric: took 3.309826ms to wait for pod list to return data ...
	I1013 22:05:17.204064  178815 default_sa.go:34] waiting for default service account to be created ...
	I1013 22:05:17.206309  178815 default_sa.go:45] found service account: "default"
	I1013 22:05:17.206334  178815 default_sa.go:55] duration metric: took 2.263271ms for default service account to be created ...
	I1013 22:05:17.206343  178815 system_pods.go:116] waiting for k8s-apps to be running ...
	I1013 22:05:17.209510  178815 system_pods.go:86] 8 kube-system pods found
	I1013 22:05:17.209584  178815 system_pods.go:89] "coredns-5dd5756b68-6k2fk" [c1ae429b-61b6-4c93-8de5-ceef5fad5f55] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 22:05:17.209596  178815 system_pods.go:89] "etcd-old-k8s-version-061725" [6589077c-d559-4773-b435-26d82965b299] Running
	I1013 22:05:17.209606  178815 system_pods.go:89] "kindnet-8j8n7" [635ce300-372b-48da-b8ea-5fceaf8b6add] Running
	I1013 22:05:17.209611  178815 system_pods.go:89] "kube-apiserver-old-k8s-version-061725" [f9a32dcc-67c3-4c39-a136-4e5bf1e802e8] Running
	I1013 22:05:17.209617  178815 system_pods.go:89] "kube-controller-manager-old-k8s-version-061725" [58899ecf-40cb-46e4-a680-d40ffebea3f8] Running
	I1013 22:05:17.209622  178815 system_pods.go:89] "kube-proxy-kglxn" [046c9623-16c1-4968-a733-8f25a8601930] Running
	I1013 22:05:17.209630  178815 system_pods.go:89] "kube-scheduler-old-k8s-version-061725" [7320e402-46f1-42bf-a89b-3ff685c76155] Running
	I1013 22:05:17.209636  178815 system_pods.go:89] "storage-provisioner" [47d1825a-9ebe-4730-b56d-677a008d0099] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 22:05:17.209662  178815 retry.go:31] will retry after 231.740597ms: missing components: kube-dns
	I1013 22:05:17.449062  178815 system_pods.go:86] 8 kube-system pods found
	I1013 22:05:17.449097  178815 system_pods.go:89] "coredns-5dd5756b68-6k2fk" [c1ae429b-61b6-4c93-8de5-ceef5fad5f55] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 22:05:17.449105  178815 system_pods.go:89] "etcd-old-k8s-version-061725" [6589077c-d559-4773-b435-26d82965b299] Running
	I1013 22:05:17.449111  178815 system_pods.go:89] "kindnet-8j8n7" [635ce300-372b-48da-b8ea-5fceaf8b6add] Running
	I1013 22:05:17.449116  178815 system_pods.go:89] "kube-apiserver-old-k8s-version-061725" [f9a32dcc-67c3-4c39-a136-4e5bf1e802e8] Running
	I1013 22:05:17.449120  178815 system_pods.go:89] "kube-controller-manager-old-k8s-version-061725" [58899ecf-40cb-46e4-a680-d40ffebea3f8] Running
	I1013 22:05:17.449127  178815 system_pods.go:89] "kube-proxy-kglxn" [046c9623-16c1-4968-a733-8f25a8601930] Running
	I1013 22:05:17.449132  178815 system_pods.go:89] "kube-scheduler-old-k8s-version-061725" [7320e402-46f1-42bf-a89b-3ff685c76155] Running
	I1013 22:05:17.449158  178815 system_pods.go:89] "storage-provisioner" [47d1825a-9ebe-4730-b56d-677a008d0099] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 22:05:17.449178  178815 retry.go:31] will retry after 319.087384ms: missing components: kube-dns
	I1013 22:05:17.772255  178815 system_pods.go:86] 8 kube-system pods found
	I1013 22:05:17.772290  178815 system_pods.go:89] "coredns-5dd5756b68-6k2fk" [c1ae429b-61b6-4c93-8de5-ceef5fad5f55] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 22:05:17.772298  178815 system_pods.go:89] "etcd-old-k8s-version-061725" [6589077c-d559-4773-b435-26d82965b299] Running
	I1013 22:05:17.772304  178815 system_pods.go:89] "kindnet-8j8n7" [635ce300-372b-48da-b8ea-5fceaf8b6add] Running
	I1013 22:05:17.772328  178815 system_pods.go:89] "kube-apiserver-old-k8s-version-061725" [f9a32dcc-67c3-4c39-a136-4e5bf1e802e8] Running
	I1013 22:05:17.772338  178815 system_pods.go:89] "kube-controller-manager-old-k8s-version-061725" [58899ecf-40cb-46e4-a680-d40ffebea3f8] Running
	I1013 22:05:17.772342  178815 system_pods.go:89] "kube-proxy-kglxn" [046c9623-16c1-4968-a733-8f25a8601930] Running
	I1013 22:05:17.772348  178815 system_pods.go:89] "kube-scheduler-old-k8s-version-061725" [7320e402-46f1-42bf-a89b-3ff685c76155] Running
	I1013 22:05:17.772354  178815 system_pods.go:89] "storage-provisioner" [47d1825a-9ebe-4730-b56d-677a008d0099] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 22:05:17.772370  178815 retry.go:31] will retry after 440.492378ms: missing components: kube-dns
	I1013 22:05:18.217853  178815 system_pods.go:86] 8 kube-system pods found
	I1013 22:05:18.217885  178815 system_pods.go:89] "coredns-5dd5756b68-6k2fk" [c1ae429b-61b6-4c93-8de5-ceef5fad5f55] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 22:05:18.217893  178815 system_pods.go:89] "etcd-old-k8s-version-061725" [6589077c-d559-4773-b435-26d82965b299] Running
	I1013 22:05:18.217900  178815 system_pods.go:89] "kindnet-8j8n7" [635ce300-372b-48da-b8ea-5fceaf8b6add] Running
	I1013 22:05:18.217905  178815 system_pods.go:89] "kube-apiserver-old-k8s-version-061725" [f9a32dcc-67c3-4c39-a136-4e5bf1e802e8] Running
	I1013 22:05:18.217910  178815 system_pods.go:89] "kube-controller-manager-old-k8s-version-061725" [58899ecf-40cb-46e4-a680-d40ffebea3f8] Running
	I1013 22:05:18.217915  178815 system_pods.go:89] "kube-proxy-kglxn" [046c9623-16c1-4968-a733-8f25a8601930] Running
	I1013 22:05:18.217925  178815 system_pods.go:89] "kube-scheduler-old-k8s-version-061725" [7320e402-46f1-42bf-a89b-3ff685c76155] Running
	I1013 22:05:18.217931  178815 system_pods.go:89] "storage-provisioner" [47d1825a-9ebe-4730-b56d-677a008d0099] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 22:05:18.217952  178815 retry.go:31] will retry after 424.38059ms: missing components: kube-dns
	I1013 22:05:18.646707  178815 system_pods.go:86] 8 kube-system pods found
	I1013 22:05:18.646738  178815 system_pods.go:89] "coredns-5dd5756b68-6k2fk" [c1ae429b-61b6-4c93-8de5-ceef5fad5f55] Running
	I1013 22:05:18.646745  178815 system_pods.go:89] "etcd-old-k8s-version-061725" [6589077c-d559-4773-b435-26d82965b299] Running
	I1013 22:05:18.646751  178815 system_pods.go:89] "kindnet-8j8n7" [635ce300-372b-48da-b8ea-5fceaf8b6add] Running
	I1013 22:05:18.646755  178815 system_pods.go:89] "kube-apiserver-old-k8s-version-061725" [f9a32dcc-67c3-4c39-a136-4e5bf1e802e8] Running
	I1013 22:05:18.646761  178815 system_pods.go:89] "kube-controller-manager-old-k8s-version-061725" [58899ecf-40cb-46e4-a680-d40ffebea3f8] Running
	I1013 22:05:18.646765  178815 system_pods.go:89] "kube-proxy-kglxn" [046c9623-16c1-4968-a733-8f25a8601930] Running
	I1013 22:05:18.646769  178815 system_pods.go:89] "kube-scheduler-old-k8s-version-061725" [7320e402-46f1-42bf-a89b-3ff685c76155] Running
	I1013 22:05:18.646773  178815 system_pods.go:89] "storage-provisioner" [47d1825a-9ebe-4730-b56d-677a008d0099] Running
	I1013 22:05:18.646780  178815 system_pods.go:126] duration metric: took 1.440431544s to wait for k8s-apps to be running ...
	I1013 22:05:18.646792  178815 system_svc.go:44] waiting for kubelet service to be running ....
	I1013 22:05:18.646859  178815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 22:05:18.659869  178815 system_svc.go:56] duration metric: took 13.067491ms WaitForService to wait for kubelet
	I1013 22:05:18.659893  178815 kubeadm.go:586] duration metric: took 15.664371418s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 22:05:18.659912  178815 node_conditions.go:102] verifying NodePressure condition ...
	I1013 22:05:18.662684  178815 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1013 22:05:18.662714  178815 node_conditions.go:123] node cpu capacity is 2
	I1013 22:05:18.662727  178815 node_conditions.go:105] duration metric: took 2.810285ms to run NodePressure ...
	I1013 22:05:18.662739  178815 start.go:241] waiting for startup goroutines ...
	I1013 22:05:18.662747  178815 start.go:246] waiting for cluster config update ...
	I1013 22:05:18.662757  178815 start.go:255] writing updated cluster config ...
	I1013 22:05:18.663032  178815 ssh_runner.go:195] Run: rm -f paused
	I1013 22:05:18.666753  178815 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 22:05:18.671746  178815 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-6k2fk" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:05:18.676692  178815 pod_ready.go:94] pod "coredns-5dd5756b68-6k2fk" is "Ready"
	I1013 22:05:18.676717  178815 pod_ready.go:86] duration metric: took 4.948472ms for pod "coredns-5dd5756b68-6k2fk" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:05:18.679523  178815 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-061725" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:05:18.684362  178815 pod_ready.go:94] pod "etcd-old-k8s-version-061725" is "Ready"
	I1013 22:05:18.684394  178815 pod_ready.go:86] duration metric: took 4.843744ms for pod "etcd-old-k8s-version-061725" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:05:18.687235  178815 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-061725" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:05:18.691965  178815 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-061725" is "Ready"
	I1013 22:05:18.691992  178815 pod_ready.go:86] duration metric: took 4.732108ms for pod "kube-apiserver-old-k8s-version-061725" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:05:18.695019  178815 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-061725" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:05:19.071640  178815 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-061725" is "Ready"
	I1013 22:05:19.071680  178815 pod_ready.go:86] duration metric: took 376.636697ms for pod "kube-controller-manager-old-k8s-version-061725" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:05:19.271734  178815 pod_ready.go:83] waiting for pod "kube-proxy-kglxn" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:05:19.671202  178815 pod_ready.go:94] pod "kube-proxy-kglxn" is "Ready"
	I1013 22:05:19.671228  178815 pod_ready.go:86] duration metric: took 399.467597ms for pod "kube-proxy-kglxn" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:05:19.871765  178815 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-061725" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:05:20.271450  178815 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-061725" is "Ready"
	I1013 22:05:20.271479  178815 pod_ready.go:86] duration metric: took 399.652625ms for pod "kube-scheduler-old-k8s-version-061725" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:05:20.271493  178815 pod_ready.go:40] duration metric: took 1.604699515s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 22:05:20.329309  178815 start.go:624] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1013 22:05:20.332438  178815 out.go:203] 
	W1013 22:05:20.335328  178815 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1013 22:05:20.338593  178815 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1013 22:05:20.341836  178815 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-061725" cluster and "default" namespace by default
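	
	The run log above covers the full bring-up of the old-k8s-version-061725 profile: kubeadm init on Kubernetes v1.28.0, applying the kindnet CNI manifest, enabling the default-storageclass and storage-provisioner addons, and then polling until the node and the kube-system pods report Ready. A minimal sketch of checking the same end state by hand, assuming the kubeconfig context minikube writes for the profile (by default named after the profile):
	
	    kubectl --context old-k8s-version-061725 get nodes -o wide
	    kubectl --context old-k8s-version-061725 -n kube-system get pods -o wide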
	
	
	==> CRI-O <==
	Oct 13 22:05:17 old-k8s-version-061725 crio[838]: time="2025-10-13T22:05:17.457217272Z" level=info msg="Created container ae152d74dbb42d96e34a56793829e89d8dd97c40b9de4bc8c535487b72ccf7a4: kube-system/coredns-5dd5756b68-6k2fk/coredns" id=741617f2-57e1-47fe-b791-5f51f7602161 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:05:17 old-k8s-version-061725 crio[838]: time="2025-10-13T22:05:17.461594838Z" level=info msg="Starting container: ae152d74dbb42d96e34a56793829e89d8dd97c40b9de4bc8c535487b72ccf7a4" id=e0a4015b-1902-42a1-8fc5-5aeb6f3538c3 name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 22:05:17 old-k8s-version-061725 crio[838]: time="2025-10-13T22:05:17.472269403Z" level=info msg="Started container" PID=1937 containerID=ae152d74dbb42d96e34a56793829e89d8dd97c40b9de4bc8c535487b72ccf7a4 description=kube-system/coredns-5dd5756b68-6k2fk/coredns id=e0a4015b-1902-42a1-8fc5-5aeb6f3538c3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a34583343fbc41543a28dac51f92b5fa8273aa7be82f513bcaf1e0966d9b92e1
	Oct 13 22:05:21 old-k8s-version-061725 crio[838]: time="2025-10-13T22:05:21.17081089Z" level=info msg="Running pod sandbox: default/busybox/POD" id=72c9cc4f-44a6-4364-b6d0-e2a024ab492c name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 13 22:05:21 old-k8s-version-061725 crio[838]: time="2025-10-13T22:05:21.170882355Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:05:21 old-k8s-version-061725 crio[838]: time="2025-10-13T22:05:21.1760413Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:b6e801f4b2e9e2d6011edca2b7a76a6ceaec1c8e44a4a6186c05e85b32c619ff UID:768e0ffa-efa2-4156-98c7-722ab5e3d117 NetNS:/var/run/netns/561728cf-b373-434f-bea9-11abc6925792 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400139c880}] Aliases:map[]}"
	Oct 13 22:05:21 old-k8s-version-061725 crio[838]: time="2025-10-13T22:05:21.176090874Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 13 22:05:21 old-k8s-version-061725 crio[838]: time="2025-10-13T22:05:21.186821315Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:b6e801f4b2e9e2d6011edca2b7a76a6ceaec1c8e44a4a6186c05e85b32c619ff UID:768e0ffa-efa2-4156-98c7-722ab5e3d117 NetNS:/var/run/netns/561728cf-b373-434f-bea9-11abc6925792 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400139c880}] Aliases:map[]}"
	Oct 13 22:05:21 old-k8s-version-061725 crio[838]: time="2025-10-13T22:05:21.18696294Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 13 22:05:21 old-k8s-version-061725 crio[838]: time="2025-10-13T22:05:21.189943773Z" level=info msg="Ran pod sandbox b6e801f4b2e9e2d6011edca2b7a76a6ceaec1c8e44a4a6186c05e85b32c619ff with infra container: default/busybox/POD" id=72c9cc4f-44a6-4364-b6d0-e2a024ab492c name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 13 22:05:21 old-k8s-version-061725 crio[838]: time="2025-10-13T22:05:21.19414526Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f3b39d55-a833-426f-b40e-b04043d5bd7a name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:05:21 old-k8s-version-061725 crio[838]: time="2025-10-13T22:05:21.194279575Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=f3b39d55-a833-426f-b40e-b04043d5bd7a name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:05:21 old-k8s-version-061725 crio[838]: time="2025-10-13T22:05:21.194328066Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=f3b39d55-a833-426f-b40e-b04043d5bd7a name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:05:21 old-k8s-version-061725 crio[838]: time="2025-10-13T22:05:21.196839076Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b8cf3909-cd7e-40b3-8e1b-84742d229a0d name=/runtime.v1.ImageService/PullImage
	Oct 13 22:05:21 old-k8s-version-061725 crio[838]: time="2025-10-13T22:05:21.200433465Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 13 22:05:23 old-k8s-version-061725 crio[838]: time="2025-10-13T22:05:23.370660116Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=b8cf3909-cd7e-40b3-8e1b-84742d229a0d name=/runtime.v1.ImageService/PullImage
	Oct 13 22:05:23 old-k8s-version-061725 crio[838]: time="2025-10-13T22:05:23.372537509Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=93865630-c80c-4c46-9e4a-5637b3d463e5 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:05:23 old-k8s-version-061725 crio[838]: time="2025-10-13T22:05:23.382140158Z" level=info msg="Creating container: default/busybox/busybox" id=942eae52-8611-4fe6-a9f2-9132454219a1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:05:23 old-k8s-version-061725 crio[838]: time="2025-10-13T22:05:23.383035931Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:05:23 old-k8s-version-061725 crio[838]: time="2025-10-13T22:05:23.388989637Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:05:23 old-k8s-version-061725 crio[838]: time="2025-10-13T22:05:23.389640345Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:05:23 old-k8s-version-061725 crio[838]: time="2025-10-13T22:05:23.405666302Z" level=info msg="Created container 53e2b792b7d20399c5f3a6cb909bc3109befd2c38beb8261b9623df568284fc7: default/busybox/busybox" id=942eae52-8611-4fe6-a9f2-9132454219a1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:05:23 old-k8s-version-061725 crio[838]: time="2025-10-13T22:05:23.406943472Z" level=info msg="Starting container: 53e2b792b7d20399c5f3a6cb909bc3109befd2c38beb8261b9623df568284fc7" id=b8271fc3-6d45-4a06-8de0-1569151b7941 name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 22:05:23 old-k8s-version-061725 crio[838]: time="2025-10-13T22:05:23.408731374Z" level=info msg="Started container" PID=1998 containerID=53e2b792b7d20399c5f3a6cb909bc3109befd2c38beb8261b9623df568284fc7 description=default/busybox/busybox id=b8271fc3-6d45-4a06-8de0-1569151b7941 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b6e801f4b2e9e2d6011edca2b7a76a6ceaec1c8e44a4a6186c05e85b32c619ff
	Oct 13 22:05:29 old-k8s-version-061725 crio[838]: time="2025-10-13T22:05:29.726458678Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
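	
	The CRI-O excerpt above is the tail of the runtime log at collection time; its last entry is an unhandled websocket-upgrade error from the runtime. A hedged way to pull the same journal directly from the node, assuming the profile is still running and CRI-O runs as the crio systemd unit (the default in this image):
	
	    minikube -p old-k8s-version-061725 ssh -- sudo journalctl -u crio --no-pager -n 30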
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	53e2b792b7d20       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   7 seconds ago       Running             busybox                   0                   b6e801f4b2e9e       busybox                                          default
	ae152d74dbb42       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      13 seconds ago      Running             coredns                   0                   a34583343fbc4       coredns-5dd5756b68-6k2fk                         kube-system
	f346f1b0d9594       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      13 seconds ago      Running             storage-provisioner       0                   9f7cf142b649d       storage-provisioner                              kube-system
	2ce6aced88ee3       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    24 seconds ago      Running             kindnet-cni               0                   eceda0fcc3a21       kindnet-8j8n7                                    kube-system
	2f47a683fdff5       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                      27 seconds ago      Running             kube-proxy                0                   1cfee6ce12c39       kube-proxy-kglxn                                 kube-system
	d00b61e5544e2       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                      48 seconds ago      Running             etcd                      0                   2acd3b8388aa2       etcd-old-k8s-version-061725                      kube-system
	fa9bb4e233de1       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                      48 seconds ago      Running             kube-controller-manager   0                   70d1313bdaacb       kube-controller-manager-old-k8s-version-061725   kube-system
	abc0f3a8ba29d       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                      48 seconds ago      Running             kube-apiserver            0                   eab1097abe517       kube-apiserver-old-k8s-version-061725            kube-system
	97ebf6bae446e       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                      48 seconds ago      Running             kube-scheduler            0                   b8bb4d2af340a       kube-scheduler-old-k8s-version-061725            kube-system
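	
	The container status table lists every container the runtime knows about for this profile, newest first, with its image, state, and owning sandbox (POD ID). Roughly the same listing can be reproduced on the node with crictl, assuming the profile container is still up:
	
	    minikube -p old-k8s-version-061725 ssh -- sudo crictl ps -a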
	
	
	==> coredns [ae152d74dbb42d96e34a56793829e89d8dd97c40b9de4bc8c535487b72ccf7a4] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:50075 - 49551 "HINFO IN 1517376686824460276.7675708874648249191. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014068656s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-061725
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-061725
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6d66ff63385795e7745a92b3d96cb54f5b977801
	                    minikube.k8s.io/name=old-k8s-version-061725
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_13T22_04_51_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 Oct 2025 22:04:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-061725
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 Oct 2025 22:05:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 Oct 2025 22:05:20 +0000   Mon, 13 Oct 2025 22:04:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 Oct 2025 22:05:20 +0000   Mon, 13 Oct 2025 22:04:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 Oct 2025 22:05:20 +0000   Mon, 13 Oct 2025 22:04:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 13 Oct 2025 22:05:20 +0000   Mon, 13 Oct 2025 22:05:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-061725
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 eed254ae485045539e40b556fb2007ee
	  System UUID:                a4ee82dc-aa4f-4d44-9281-73541a0cdcab
	  Boot ID:                    d306204e-e74c-4697-9887-c9f3c96cc083
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-5dd5756b68-6k2fk                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     29s
	  kube-system                 etcd-old-k8s-version-061725                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         41s
	  kube-system                 kindnet-8j8n7                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-old-k8s-version-061725             250m (12%)    0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-controller-manager-old-k8s-version-061725    200m (10%)    0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-proxy-kglxn                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-old-k8s-version-061725             100m (5%)     0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 27s                kube-proxy       
	  Normal  Starting                 49s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  49s (x8 over 49s)  kubelet          Node old-k8s-version-061725 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    49s (x8 over 49s)  kubelet          Node old-k8s-version-061725 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     49s (x8 over 49s)  kubelet          Node old-k8s-version-061725 status is now: NodeHasSufficientPID
	  Normal  Starting                 42s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  41s                kubelet          Node old-k8s-version-061725 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    41s                kubelet          Node old-k8s-version-061725 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     41s                kubelet          Node old-k8s-version-061725 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           29s                node-controller  Node old-k8s-version-061725 event: Registered Node old-k8s-version-061725 in Controller
	  Normal  NodeReady                14s                kubelet          Node old-k8s-version-061725 status is now: NodeReady
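	
	As a quick cross-check of the Allocated resources block above: the CPU requests of the listed pods sum to 100m + 100m + 100m + 250m + 200m + 100m = 850m, which against the 2-CPU (2000m) allocatable node works out to 42.5%, shown rounded to 42%. The memory figures add up the same way: 70Mi + 100Mi + 50Mi = 220Mi requested, and 170Mi + 50Mi = 220Mi in limits.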
	
	
	==> dmesg <==
	[Oct13 21:30] hrtimer: interrupt took 51471165 ns
	[Oct13 21:31] overlayfs: idmapped layers are currently not supported
	[Oct13 21:36] overlayfs: idmapped layers are currently not supported
	[ +36.803698] overlayfs: idmapped layers are currently not supported
	[Oct13 21:38] overlayfs: idmapped layers are currently not supported
	[Oct13 21:39] overlayfs: idmapped layers are currently not supported
	[Oct13 21:40] overlayfs: idmapped layers are currently not supported
	[Oct13 21:41] overlayfs: idmapped layers are currently not supported
	[Oct13 21:42] overlayfs: idmapped layers are currently not supported
	[  +7.684868] overlayfs: idmapped layers are currently not supported
	[Oct13 21:43] overlayfs: idmapped layers are currently not supported
	[ +17.500139] overlayfs: idmapped layers are currently not supported
	[Oct13 21:44] overlayfs: idmapped layers are currently not supported
	[ +25.978359] overlayfs: idmapped layers are currently not supported
	[Oct13 21:46] overlayfs: idmapped layers are currently not supported
	[Oct13 21:47] overlayfs: idmapped layers are currently not supported
	[Oct13 21:49] overlayfs: idmapped layers are currently not supported
	[Oct13 21:50] overlayfs: idmapped layers are currently not supported
	[Oct13 21:51] overlayfs: idmapped layers are currently not supported
	[Oct13 21:53] overlayfs: idmapped layers are currently not supported
	[Oct13 21:54] overlayfs: idmapped layers are currently not supported
	[Oct13 21:55] overlayfs: idmapped layers are currently not supported
	[Oct13 22:02] overlayfs: idmapped layers are currently not supported
	[Oct13 22:04] overlayfs: idmapped layers are currently not supported
	[ +37.438407] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [d00b61e5544e2aecb74606c30544aaac0e156ca9248a2f5f37d6f127f8300d92] <==
	{"level":"info","ts":"2025-10-13T22:04:43.240362Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-10-13T22:04:43.242405Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-10-13T22:04:43.243576Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-13T22:04:43.243749Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-13T22:04:43.247813Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-13T22:04:43.248585Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-13T22:04:43.248661Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-13T22:04:43.400389Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2025-10-13T22:04:43.400507Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2025-10-13T22:04:43.400548Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2025-10-13T22:04:43.400586Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2025-10-13T22:04:43.40062Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-10-13T22:04:43.400675Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2025-10-13T22:04:43.400704Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-10-13T22:04:43.402466Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-061725 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-13T22:04:43.402547Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-13T22:04:43.403582Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-13T22:04:43.405456Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-13T22:04:43.405925Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-13T22:04:43.410169Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-10-13T22:04:43.415916Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-13T22:04:43.41598Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-13T22:04:43.416353Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-13T22:04:43.416518Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-13T22:04:43.416579Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 22:05:31 up  1:47,  0 user,  load average: 1.38, 1.36, 1.67
	Linux old-k8s-version-061725 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [2ce6aced88ee3cf78135358b33512c80f4406f50c988398ed4884421d6bbaca2] <==
	I1013 22:05:06.412178       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1013 22:05:06.412550       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1013 22:05:06.412681       1 main.go:148] setting mtu 1500 for CNI 
	I1013 22:05:06.412699       1 main.go:178] kindnetd IP family: "ipv4"
	I1013 22:05:06.412713       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-13T22:05:06Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1013 22:05:06.613462       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1013 22:05:06.708193       1 controller.go:381] "Waiting for informer caches to sync"
	I1013 22:05:06.708902       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1013 22:05:06.709017       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1013 22:05:06.811344       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1013 22:05:06.811376       1 metrics.go:72] Registering metrics
	I1013 22:05:06.811434       1 controller.go:711] "Syncing nftables rules"
	I1013 22:05:16.613110       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1013 22:05:16.613166       1 main.go:301] handling current node
	I1013 22:05:26.613417       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1013 22:05:26.613448       1 main.go:301] handling current node
	
	
	==> kube-apiserver [abc0f3a8ba29d7106e24171e41fb1d23f1d4d6313f46387fffbf3feac1c21985] <==
	I1013 22:04:46.913397       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1013 22:04:46.914092       1 shared_informer.go:318] Caches are synced for configmaps
	I1013 22:04:46.914898       1 aggregator.go:166] initial CRD sync complete...
	I1013 22:04:46.914950       1 autoregister_controller.go:141] Starting autoregister controller
	I1013 22:04:46.914981       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1013 22:04:46.915016       1 cache.go:39] Caches are synced for autoregister controller
	I1013 22:04:46.914236       1 controller.go:624] quota admission added evaluator for: namespaces
	I1013 22:04:46.918370       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1013 22:04:46.946704       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1013 22:04:47.636049       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1013 22:04:47.649953       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1013 22:04:47.650044       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1013 22:04:48.351577       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1013 22:04:48.401252       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1013 22:04:48.537151       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1013 22:04:48.551693       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1013 22:04:48.553171       1 controller.go:624] quota admission added evaluator for: endpoints
	I1013 22:04:48.560437       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1013 22:04:48.849977       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1013 22:04:49.916746       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1013 22:04:49.933284       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1013 22:04:49.951907       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	http2: server: error reading preface from client 192.168.85.2:42538: read tcp 192.168.85.2:8443->192.168.85.2:42538: read: connection reset by peer
	I1013 22:05:02.677648       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1013 22:05:02.685795       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [fa9bb4e233de15b8dcb55da4035d0880001aeda26f89ff9b8dcf1056d6c88a57] <==
	I1013 22:05:02.689332       1 shared_informer.go:318] Caches are synced for resource quota
	I1013 22:05:02.692061       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1013 22:05:02.699262       1 shared_informer.go:318] Caches are synced for HPA
	I1013 22:05:02.705676       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-8j8n7"
	I1013 22:05:02.722689       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-kglxn"
	I1013 22:05:02.734172       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-jtdpz"
	I1013 22:05:02.777823       1 shared_informer.go:318] Caches are synced for resource quota
	I1013 22:05:02.797180       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-6k2fk"
	I1013 22:05:02.852717       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="163.629902ms"
	I1013 22:05:02.890181       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="36.950221ms"
	I1013 22:05:02.890335       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="114.352µs"
	I1013 22:05:03.081380       1 shared_informer.go:318] Caches are synced for garbage collector
	I1013 22:05:03.081422       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1013 22:05:03.161394       1 shared_informer.go:318] Caches are synced for garbage collector
	I1013 22:05:04.704505       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1013 22:05:04.747676       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-jtdpz"
	I1013 22:05:04.757131       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="53.294965ms"
	I1013 22:05:04.772567       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="15.056546ms"
	I1013 22:05:04.772855       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="82.017µs"
	I1013 22:05:17.086851       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="105.786µs"
	I1013 22:05:17.105980       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="56.515µs"
	I1013 22:05:17.605080       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1013 22:05:18.276751       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="97.852µs"
	I1013 22:05:18.308946       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="16.382969ms"
	I1013 22:05:18.309105       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="46.44µs"
	
	
	==> kube-proxy [2f47a683fdff5e0f379d998b620f7c9706a094071e84dd13d339c6e060237aef] <==
	I1013 22:05:04.222521       1 server_others.go:69] "Using iptables proxy"
	I1013 22:05:04.266436       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1013 22:05:04.326668       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1013 22:05:04.328807       1 server_others.go:152] "Using iptables Proxier"
	I1013 22:05:04.328837       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1013 22:05:04.328844       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1013 22:05:04.328875       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1013 22:05:04.329062       1 server.go:846] "Version info" version="v1.28.0"
	I1013 22:05:04.329071       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 22:05:04.330248       1 config.go:188] "Starting service config controller"
	I1013 22:05:04.330257       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1013 22:05:04.330283       1 config.go:97] "Starting endpoint slice config controller"
	I1013 22:05:04.330287       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1013 22:05:04.330635       1 config.go:315] "Starting node config controller"
	I1013 22:05:04.330641       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1013 22:05:04.431407       1 shared_informer.go:318] Caches are synced for node config
	I1013 22:05:04.431443       1 shared_informer.go:318] Caches are synced for service config
	I1013 22:05:04.431488       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [97ebf6bae446e1d40236c88fdc11e41c98e405cb702dbfb2ac74ee904036c393] <==
	W1013 22:04:46.903962       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1013 22:04:46.904018       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1013 22:04:46.904025       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1013 22:04:46.904081       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1013 22:04:46.904069       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1013 22:04:46.904146       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1013 22:04:46.904120       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1013 22:04:46.904230       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1013 22:04:46.903907       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1013 22:04:46.904295       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1013 22:04:47.741395       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1013 22:04:47.741512       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1013 22:04:47.769211       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1013 22:04:47.769250       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1013 22:04:47.785239       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1013 22:04:47.786738       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1013 22:04:47.872835       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1013 22:04:47.872875       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1013 22:04:47.914195       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1013 22:04:47.914238       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1013 22:04:47.918355       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1013 22:04:47.918388       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1013 22:04:48.000378       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1013 22:04:48.000423       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1013 22:04:49.791992       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 13 22:05:02 old-k8s-version-061725 kubelet[1378]: E1013 22:05:02.759292    1378 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-061725" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-061725' and this object
	Oct 13 22:05:02 old-k8s-version-061725 kubelet[1378]: I1013 22:05:02.818320    1378 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/046c9623-16c1-4968-a733-8f25a8601930-xtables-lock\") pod \"kube-proxy-kglxn\" (UID: \"046c9623-16c1-4968-a733-8f25a8601930\") " pod="kube-system/kube-proxy-kglxn"
	Oct 13 22:05:02 old-k8s-version-061725 kubelet[1378]: I1013 22:05:02.818520    1378 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/046c9623-16c1-4968-a733-8f25a8601930-lib-modules\") pod \"kube-proxy-kglxn\" (UID: \"046c9623-16c1-4968-a733-8f25a8601930\") " pod="kube-system/kube-proxy-kglxn"
	Oct 13 22:05:02 old-k8s-version-061725 kubelet[1378]: I1013 22:05:02.818621    1378 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/635ce300-372b-48da-b8ea-5fceaf8b6add-lib-modules\") pod \"kindnet-8j8n7\" (UID: \"635ce300-372b-48da-b8ea-5fceaf8b6add\") " pod="kube-system/kindnet-8j8n7"
	Oct 13 22:05:02 old-k8s-version-061725 kubelet[1378]: I1013 22:05:02.818723    1378 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gflp6\" (UniqueName: \"kubernetes.io/projected/635ce300-372b-48da-b8ea-5fceaf8b6add-kube-api-access-gflp6\") pod \"kindnet-8j8n7\" (UID: \"635ce300-372b-48da-b8ea-5fceaf8b6add\") " pod="kube-system/kindnet-8j8n7"
	Oct 13 22:05:02 old-k8s-version-061725 kubelet[1378]: I1013 22:05:02.818830    1378 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/635ce300-372b-48da-b8ea-5fceaf8b6add-cni-cfg\") pod \"kindnet-8j8n7\" (UID: \"635ce300-372b-48da-b8ea-5fceaf8b6add\") " pod="kube-system/kindnet-8j8n7"
	Oct 13 22:05:02 old-k8s-version-061725 kubelet[1378]: I1013 22:05:02.818922    1378 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/635ce300-372b-48da-b8ea-5fceaf8b6add-xtables-lock\") pod \"kindnet-8j8n7\" (UID: \"635ce300-372b-48da-b8ea-5fceaf8b6add\") " pod="kube-system/kindnet-8j8n7"
	Oct 13 22:05:02 old-k8s-version-061725 kubelet[1378]: I1013 22:05:02.819009    1378 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/046c9623-16c1-4968-a733-8f25a8601930-kube-proxy\") pod \"kube-proxy-kglxn\" (UID: \"046c9623-16c1-4968-a733-8f25a8601930\") " pod="kube-system/kube-proxy-kglxn"
	Oct 13 22:05:02 old-k8s-version-061725 kubelet[1378]: I1013 22:05:02.819103    1378 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fc4b\" (UniqueName: \"kubernetes.io/projected/046c9623-16c1-4968-a733-8f25a8601930-kube-api-access-8fc4b\") pod \"kube-proxy-kglxn\" (UID: \"046c9623-16c1-4968-a733-8f25a8601930\") " pod="kube-system/kube-proxy-kglxn"
	Oct 13 22:05:03 old-k8s-version-061725 kubelet[1378]: W1013 22:05:03.072389    1378 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/9b67329f891f8d4a601d1da44c24f1f70a816b6b9dd1271a7207bc7ac21cc041/crio-eceda0fcc3a21f0329bb3c44aa6ae535555cd8917e003f1923884f84530528b1 WatchSource:0}: Error finding container eceda0fcc3a21f0329bb3c44aa6ae535555cd8917e003f1923884f84530528b1: Status 404 returned error can't find the container with id eceda0fcc3a21f0329bb3c44aa6ae535555cd8917e003f1923884f84530528b1
	Oct 13 22:05:03 old-k8s-version-061725 kubelet[1378]: W1013 22:05:03.979073    1378 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/9b67329f891f8d4a601d1da44c24f1f70a816b6b9dd1271a7207bc7ac21cc041/crio-1cfee6ce12c39ebbf05669262a126852cada7298017b736b49d144e4a64554ac WatchSource:0}: Error finding container 1cfee6ce12c39ebbf05669262a126852cada7298017b736b49d144e4a64554ac: Status 404 returned error can't find the container with id 1cfee6ce12c39ebbf05669262a126852cada7298017b736b49d144e4a64554ac
	Oct 13 22:05:04 old-k8s-version-061725 kubelet[1378]: I1013 22:05:04.258643    1378 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-kglxn" podStartSLOduration=2.258595557 podCreationTimestamp="2025-10-13 22:05:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 22:05:04.249651377 +0000 UTC m=+14.369229771" watchObservedRunningTime="2025-10-13 22:05:04.258595557 +0000 UTC m=+14.378173959"
	Oct 13 22:05:17 old-k8s-version-061725 kubelet[1378]: I1013 22:05:17.051386    1378 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Oct 13 22:05:17 old-k8s-version-061725 kubelet[1378]: I1013 22:05:17.084351    1378 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-8j8n7" podStartSLOduration=11.883625538 podCreationTimestamp="2025-10-13 22:05:02 +0000 UTC" firstStartedPulling="2025-10-13 22:05:03.088989516 +0000 UTC m=+13.208567910" lastFinishedPulling="2025-10-13 22:05:06.289671967 +0000 UTC m=+16.409250369" observedRunningTime="2025-10-13 22:05:07.249828914 +0000 UTC m=+17.369407316" watchObservedRunningTime="2025-10-13 22:05:17.084307997 +0000 UTC m=+27.203886399"
	Oct 13 22:05:17 old-k8s-version-061725 kubelet[1378]: I1013 22:05:17.084710    1378 topology_manager.go:215] "Topology Admit Handler" podUID="c1ae429b-61b6-4c93-8de5-ceef5fad5f55" podNamespace="kube-system" podName="coredns-5dd5756b68-6k2fk"
	Oct 13 22:05:17 old-k8s-version-061725 kubelet[1378]: I1013 22:05:17.090497    1378 topology_manager.go:215] "Topology Admit Handler" podUID="47d1825a-9ebe-4730-b56d-677a008d0099" podNamespace="kube-system" podName="storage-provisioner"
	Oct 13 22:05:17 old-k8s-version-061725 kubelet[1378]: I1013 22:05:17.237290    1378 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4k48\" (UniqueName: \"kubernetes.io/projected/47d1825a-9ebe-4730-b56d-677a008d0099-kube-api-access-p4k48\") pod \"storage-provisioner\" (UID: \"47d1825a-9ebe-4730-b56d-677a008d0099\") " pod="kube-system/storage-provisioner"
	Oct 13 22:05:17 old-k8s-version-061725 kubelet[1378]: I1013 22:05:17.237345    1378 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c1ae429b-61b6-4c93-8de5-ceef5fad5f55-config-volume\") pod \"coredns-5dd5756b68-6k2fk\" (UID: \"c1ae429b-61b6-4c93-8de5-ceef5fad5f55\") " pod="kube-system/coredns-5dd5756b68-6k2fk"
	Oct 13 22:05:17 old-k8s-version-061725 kubelet[1378]: I1013 22:05:17.237373    1378 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95vmg\" (UniqueName: \"kubernetes.io/projected/c1ae429b-61b6-4c93-8de5-ceef5fad5f55-kube-api-access-95vmg\") pod \"coredns-5dd5756b68-6k2fk\" (UID: \"c1ae429b-61b6-4c93-8de5-ceef5fad5f55\") " pod="kube-system/coredns-5dd5756b68-6k2fk"
	Oct 13 22:05:17 old-k8s-version-061725 kubelet[1378]: I1013 22:05:17.237397    1378 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/47d1825a-9ebe-4730-b56d-677a008d0099-tmp\") pod \"storage-provisioner\" (UID: \"47d1825a-9ebe-4730-b56d-677a008d0099\") " pod="kube-system/storage-provisioner"
	Oct 13 22:05:17 old-k8s-version-061725 kubelet[1378]: W1013 22:05:17.403638    1378 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/9b67329f891f8d4a601d1da44c24f1f70a816b6b9dd1271a7207bc7ac21cc041/crio-9f7cf142b649d9f3b3da39f2be3307f4978e4861db9b808e6e73215b33ca4c32 WatchSource:0}: Error finding container 9f7cf142b649d9f3b3da39f2be3307f4978e4861db9b808e6e73215b33ca4c32: Status 404 returned error can't find the container with id 9f7cf142b649d9f3b3da39f2be3307f4978e4861db9b808e6e73215b33ca4c32
	Oct 13 22:05:18 old-k8s-version-061725 kubelet[1378]: I1013 22:05:18.292743    1378 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-6k2fk" podStartSLOduration=16.292699285 podCreationTimestamp="2025-10-13 22:05:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 22:05:18.278219989 +0000 UTC m=+28.397798383" watchObservedRunningTime="2025-10-13 22:05:18.292699285 +0000 UTC m=+28.412277679"
	Oct 13 22:05:20 old-k8s-version-061725 kubelet[1378]: I1013 22:05:20.568276    1378 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=16.568230936 podCreationTimestamp="2025-10-13 22:05:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 22:05:18.315008179 +0000 UTC m=+28.434586605" watchObservedRunningTime="2025-10-13 22:05:20.568230936 +0000 UTC m=+30.687809329"
	Oct 13 22:05:20 old-k8s-version-061725 kubelet[1378]: I1013 22:05:20.569070    1378 topology_manager.go:215] "Topology Admit Handler" podUID="768e0ffa-efa2-4156-98c7-722ab5e3d117" podNamespace="default" podName="busybox"
	Oct 13 22:05:20 old-k8s-version-061725 kubelet[1378]: I1013 22:05:20.757157    1378 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lf245\" (UniqueName: \"kubernetes.io/projected/768e0ffa-efa2-4156-98c7-722ab5e3d117-kube-api-access-lf245\") pod \"busybox\" (UID: \"768e0ffa-efa2-4156-98c7-722ab5e3d117\") " pod="default/busybox"
	
	
	==> storage-provisioner [f346f1b0d959465299c2300f288431de6aec92ff897ad5d9ea29c0980dfcd380] <==
	I1013 22:05:17.473507       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1013 22:05:17.493895       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1013 22:05:17.494056       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1013 22:05:17.507694       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1013 22:05:17.509715       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a7559b29-90e7-44b0-9ce8-e3c256861aa5", APIVersion:"v1", ResourceVersion:"436", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-061725_b757201f-1e37-49d2-9631-271b2cb0acfc became leader
	I1013 22:05:17.511523       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-061725_b757201f-1e37-49d2-9631-271b2cb0acfc!
	I1013 22:05:17.616248       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-061725_b757201f-1e37-49d2-9631-271b2cb0acfc!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-061725 -n old-k8s-version-061725
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-061725 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.45s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (8.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-061725 --alsologtostderr -v=1
E1013 22:06:50.818669    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p old-k8s-version-061725 --alsologtostderr -v=1: exit status 80 (2.181348735s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-061725 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1013 22:06:48.716343  188395 out.go:360] Setting OutFile to fd 1 ...
	I1013 22:06:48.716460  188395 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:06:48.716466  188395 out.go:374] Setting ErrFile to fd 2...
	I1013 22:06:48.716471  188395 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:06:48.716815  188395 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-2495/.minikube/bin
	I1013 22:06:48.717086  188395 out.go:368] Setting JSON to false
	I1013 22:06:48.717100  188395 mustload.go:65] Loading cluster: old-k8s-version-061725
	I1013 22:06:48.719366  188395 config.go:182] Loaded profile config "old-k8s-version-061725": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1013 22:06:48.719938  188395 cli_runner.go:164] Run: docker container inspect old-k8s-version-061725 --format={{.State.Status}}
	I1013 22:06:48.740425  188395 host.go:66] Checking if "old-k8s-version-061725" exists ...
	I1013 22:06:48.740876  188395 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 22:06:48.858137  188395 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:59 OomKillDisable:true NGoroutines:70 SystemTime:2025-10-13 22:06:48.841341174 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 22:06:48.859311  188395 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-061725 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1013 22:06:48.867761  188395 out.go:179] * Pausing node old-k8s-version-061725 ... 
	I1013 22:06:48.885497  188395 host.go:66] Checking if "old-k8s-version-061725" exists ...
	I1013 22:06:48.886681  188395 ssh_runner.go:195] Run: systemctl --version
	I1013 22:06:48.886744  188395 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-061725
	I1013 22:06:48.907850  188395 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33056 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/old-k8s-version-061725/id_rsa Username:docker}
	I1013 22:06:49.019025  188395 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 22:06:49.041373  188395 pause.go:52] kubelet running: true
	I1013 22:06:49.041435  188395 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1013 22:06:49.367598  188395 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1013 22:06:49.367683  188395 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1013 22:06:49.476742  188395 cri.go:89] found id: "f4214d686cc551d918ce7ab3ebc086aed6b9ef041c9d4f95ae3e52094f9f8fe4"
	I1013 22:06:49.476816  188395 cri.go:89] found id: "21645635ce14d06941230e4e6235b9280959f93614831d10abae4cb0b70f1236"
	I1013 22:06:49.476835  188395 cri.go:89] found id: "b5cfe60fee50aca9adeb9a7210f96baa84cf5ff86310fb648ab48513f8990dd9"
	I1013 22:06:49.476853  188395 cri.go:89] found id: "cb2cbcad768d11f9ca1e964c26de2e6f7d02f101ebb55b59faa4319adff9e6db"
	I1013 22:06:49.476883  188395 cri.go:89] found id: "c648129a3253eaa1e9d7547c6256957ad5a93b39cb7716180e8208547ea6cdcc"
	I1013 22:06:49.476904  188395 cri.go:89] found id: "6eed8544403e99d9185f9c6d9e6d28a7fdd3896c087aeec1df5be870a03bbce0"
	I1013 22:06:49.476921  188395 cri.go:89] found id: "b8caee63181a735c804ef9eb3da1040c9ec20a7c106dec4be1a1e2979c1008be"
	I1013 22:06:49.476936  188395 cri.go:89] found id: "7b9a569532bd578665c2febc0e862c8b3dfa6aa451acd0888258fd6f6bd613b9"
	I1013 22:06:49.476964  188395 cri.go:89] found id: "ea02ef13f9182b9021218457ea3dab09ac4d242a483e773331138defe8ef3896"
	I1013 22:06:49.476987  188395 cri.go:89] found id: "99688e937edfd2a11427cea137c4ab16f0c12b9ff59808610701730ba426a9b5"
	I1013 22:06:49.477003  188395 cri.go:89] found id: "bee272b4edb8bfa59232efb28167b53a742045a704a78d5cb04dab0c16c607ad"
	I1013 22:06:49.477019  188395 cri.go:89] found id: ""
	I1013 22:06:49.477101  188395 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 22:06:49.554969  188395 retry.go:31] will retry after 208.22519ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:06:49Z" level=error msg="open /run/runc: no such file or directory"
	I1013 22:06:49.763388  188395 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 22:06:49.779465  188395 pause.go:52] kubelet running: false
	I1013 22:06:49.779543  188395 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1013 22:06:49.996252  188395 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1013 22:06:49.996347  188395 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1013 22:06:50.092640  188395 cri.go:89] found id: "f4214d686cc551d918ce7ab3ebc086aed6b9ef041c9d4f95ae3e52094f9f8fe4"
	I1013 22:06:50.092664  188395 cri.go:89] found id: "21645635ce14d06941230e4e6235b9280959f93614831d10abae4cb0b70f1236"
	I1013 22:06:50.092671  188395 cri.go:89] found id: "b5cfe60fee50aca9adeb9a7210f96baa84cf5ff86310fb648ab48513f8990dd9"
	I1013 22:06:50.092675  188395 cri.go:89] found id: "cb2cbcad768d11f9ca1e964c26de2e6f7d02f101ebb55b59faa4319adff9e6db"
	I1013 22:06:50.092678  188395 cri.go:89] found id: "c648129a3253eaa1e9d7547c6256957ad5a93b39cb7716180e8208547ea6cdcc"
	I1013 22:06:50.092682  188395 cri.go:89] found id: "6eed8544403e99d9185f9c6d9e6d28a7fdd3896c087aeec1df5be870a03bbce0"
	I1013 22:06:50.092685  188395 cri.go:89] found id: "b8caee63181a735c804ef9eb3da1040c9ec20a7c106dec4be1a1e2979c1008be"
	I1013 22:06:50.092688  188395 cri.go:89] found id: "7b9a569532bd578665c2febc0e862c8b3dfa6aa451acd0888258fd6f6bd613b9"
	I1013 22:06:50.092691  188395 cri.go:89] found id: "ea02ef13f9182b9021218457ea3dab09ac4d242a483e773331138defe8ef3896"
	I1013 22:06:50.092697  188395 cri.go:89] found id: "99688e937edfd2a11427cea137c4ab16f0c12b9ff59808610701730ba426a9b5"
	I1013 22:06:50.092706  188395 cri.go:89] found id: "bee272b4edb8bfa59232efb28167b53a742045a704a78d5cb04dab0c16c607ad"
	I1013 22:06:50.092712  188395 cri.go:89] found id: ""
	I1013 22:06:50.092763  188395 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 22:06:50.107015  188395 retry.go:31] will retry after 352.718926ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:06:50Z" level=error msg="open /run/runc: no such file or directory"
	I1013 22:06:50.460319  188395 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 22:06:50.473060  188395 pause.go:52] kubelet running: false
	I1013 22:06:50.473120  188395 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1013 22:06:50.702896  188395 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1013 22:06:50.702968  188395 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1013 22:06:50.785914  188395 cri.go:89] found id: "f4214d686cc551d918ce7ab3ebc086aed6b9ef041c9d4f95ae3e52094f9f8fe4"
	I1013 22:06:50.785933  188395 cri.go:89] found id: "21645635ce14d06941230e4e6235b9280959f93614831d10abae4cb0b70f1236"
	I1013 22:06:50.785938  188395 cri.go:89] found id: "b5cfe60fee50aca9adeb9a7210f96baa84cf5ff86310fb648ab48513f8990dd9"
	I1013 22:06:50.785942  188395 cri.go:89] found id: "cb2cbcad768d11f9ca1e964c26de2e6f7d02f101ebb55b59faa4319adff9e6db"
	I1013 22:06:50.785946  188395 cri.go:89] found id: "c648129a3253eaa1e9d7547c6256957ad5a93b39cb7716180e8208547ea6cdcc"
	I1013 22:06:50.785950  188395 cri.go:89] found id: "6eed8544403e99d9185f9c6d9e6d28a7fdd3896c087aeec1df5be870a03bbce0"
	I1013 22:06:50.785953  188395 cri.go:89] found id: "b8caee63181a735c804ef9eb3da1040c9ec20a7c106dec4be1a1e2979c1008be"
	I1013 22:06:50.785956  188395 cri.go:89] found id: "7b9a569532bd578665c2febc0e862c8b3dfa6aa451acd0888258fd6f6bd613b9"
	I1013 22:06:50.785959  188395 cri.go:89] found id: "ea02ef13f9182b9021218457ea3dab09ac4d242a483e773331138defe8ef3896"
	I1013 22:06:50.785969  188395 cri.go:89] found id: "99688e937edfd2a11427cea137c4ab16f0c12b9ff59808610701730ba426a9b5"
	I1013 22:06:50.785973  188395 cri.go:89] found id: "bee272b4edb8bfa59232efb28167b53a742045a704a78d5cb04dab0c16c607ad"
	I1013 22:06:50.785976  188395 cri.go:89] found id: ""
	I1013 22:06:50.786026  188395 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 22:06:50.804798  188395 out.go:203] 
	W1013 22:06:50.808214  188395 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:06:50Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:06:50Z" level=error msg="open /run/runc: no such file or directory"
	
	W1013 22:06:50.808274  188395 out.go:285] * 
	* 
	W1013 22:06:50.814081  188395 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1013 22:06:50.818534  188395 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p old-k8s-version-061725 --alsologtostderr -v=1 failed: exit status 80
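The stderr above shows where the pause path stops: crictl returns running container IDs for the kube-system namespace, but `sudo runc list -f json` exits 1 because the state root /run/runc is missing, and after the retries at 22:06:49 and 22:06:50 minikube gives up with GUEST_PAUSE. The Go sketch below only reproduces that listing step for illustration; it is not minikube's pause implementation, and the idea that CRI-O on this node may be driving a different OCI runtime or a different runc state root is an assumption, not something this log confirms.

	// Minimal sketch (not minikube's actual pause code) of the container-listing
	// step that fails above. Both command lines are copied from the log output;
	// the interpretation in the comments is illustrative only.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same crictl invocation minikube runs to collect kube-system container IDs.
		crictl := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system")
		ids, _ := crictl.CombinedOutput()
		fmt.Printf("crictl container IDs:\n%s", ids)

		// Same runc invocation that exits 1 on this node because /run/runc is absent.
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err != nil {
			// On this node the stderr is:
			//   open /run/runc: no such file or directory
			fmt.Printf("list running: runc: %v\n%s", err, out)
			return
		}
		fmt.Printf("runc containers: %s\n", out)
	}

If the runtime-root assumption is right, pointing runc at whatever state directory CRI-O actually uses (for example with `sudo runc --root <dir> list`) would list the containers that crictl already sees; the concrete directory would have to come from the node's crio.conf, which is not captured in this log.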
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-061725
helpers_test.go:243: (dbg) docker inspect old-k8s-version-061725:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9b67329f891f8d4a601d1da44c24f1f70a816b6b9dd1271a7207bc7ac21cc041",
	        "Created": "2025-10-13T22:04:24.643297678Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 182454,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-13T22:05:44.548725553Z",
	            "FinishedAt": "2025-10-13T22:05:43.809166371Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/9b67329f891f8d4a601d1da44c24f1f70a816b6b9dd1271a7207bc7ac21cc041/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9b67329f891f8d4a601d1da44c24f1f70a816b6b9dd1271a7207bc7ac21cc041/hostname",
	        "HostsPath": "/var/lib/docker/containers/9b67329f891f8d4a601d1da44c24f1f70a816b6b9dd1271a7207bc7ac21cc041/hosts",
	        "LogPath": "/var/lib/docker/containers/9b67329f891f8d4a601d1da44c24f1f70a816b6b9dd1271a7207bc7ac21cc041/9b67329f891f8d4a601d1da44c24f1f70a816b6b9dd1271a7207bc7ac21cc041-json.log",
	        "Name": "/old-k8s-version-061725",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-061725:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-061725",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9b67329f891f8d4a601d1da44c24f1f70a816b6b9dd1271a7207bc7ac21cc041",
	                "LowerDir": "/var/lib/docker/overlay2/d87821f609fa965d573bd1d67dbfade9ad46250a90bd0a64282669cf2490b2b8-init/diff:/var/lib/docker/overlay2/160e637be34680c755ffc6109c686a7fe5c4dfcba06c9274b3806724dc064518/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d87821f609fa965d573bd1d67dbfade9ad46250a90bd0a64282669cf2490b2b8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d87821f609fa965d573bd1d67dbfade9ad46250a90bd0a64282669cf2490b2b8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d87821f609fa965d573bd1d67dbfade9ad46250a90bd0a64282669cf2490b2b8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-061725",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-061725/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-061725",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-061725",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-061725",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "caa917960a47798ece5d82fa14160d4e7e25e9c246505f70866fbc11a26c40a2",
	            "SandboxKey": "/var/run/docker/netns/caa917960a47",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33056"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33057"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33060"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33058"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33059"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-061725": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "32:8a:ab:96:0f:99",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "342c36a433557ef5e18c5eb6a5e2eade730d4334bff3f113c0f457eda67e9161",
	                    "EndpointID": "a4f4bbf924933ac83086927e6fb72694bbc6a68ba52cbcc18e4d5cd8b338f5f5",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-061725",
	                        "9b67329f891f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
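The full inspect dump above can be narrowed to the fields relevant to a failed pause (container state and published ports) with docker's Go-template formatter; a minimal sketch, separate from the captured output:

	# container state only
	docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' old-k8s-version-061725
	# published host ports as JSON
	docker inspect -f '{{json .NetworkSettings.Ports}}' old-k8s-version-061725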
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-061725 -n old-k8s-version-061725
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-061725 -n old-k8s-version-061725: exit status 2 (463.131925ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
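The exit status 2 from the {{.Host}} probe reflects component state rather than a command error (the harness itself notes "may be ok"); a per-component breakdown can be requested directly, assuming the status subcommand's JSON output flag is available in this build:

	out/minikube-linux-arm64 status -p old-k8s-version-061725 --output=json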
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-061725 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-061725 logs -n 25: (1.791387578s)
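Only the last 25 lines are captured below (-n 25); when triaging locally, the complete log can be written to a file instead, assuming the logs subcommand's --file flag is present in this build:

	out/minikube-linux-arm64 -p old-k8s-version-061725 logs --file=old-k8s-version-061725.log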
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-122822 sudo containerd config dump                                                                                                                                                                                                  │ cilium-122822             │ jenkins │ v1.37.0 │ 13 Oct 25 21:55 UTC │                     │
	│ ssh     │ -p cilium-122822 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-122822             │ jenkins │ v1.37.0 │ 13 Oct 25 21:55 UTC │                     │
	│ ssh     │ -p cilium-122822 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-122822             │ jenkins │ v1.37.0 │ 13 Oct 25 21:55 UTC │                     │
	│ ssh     │ -p cilium-122822 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-122822             │ jenkins │ v1.37.0 │ 13 Oct 25 21:55 UTC │                     │
	│ ssh     │ -p cilium-122822 sudo crio config                                                                                                                                                                                                             │ cilium-122822             │ jenkins │ v1.37.0 │ 13 Oct 25 21:55 UTC │                     │
	│ delete  │ -p cilium-122822                                                                                                                                                                                                                              │ cilium-122822             │ jenkins │ v1.37.0 │ 13 Oct 25 21:55 UTC │ 13 Oct 25 21:55 UTC │
	│ start   │ -p force-systemd-env-312094 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-312094  │ jenkins │ v1.37.0 │ 13 Oct 25 21:55 UTC │                     │
	│ ssh     │ force-systemd-flag-257205 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-257205 │ jenkins │ v1.37.0 │ 13 Oct 25 22:02 UTC │ 13 Oct 25 22:02 UTC │
	│ delete  │ -p force-systemd-flag-257205                                                                                                                                                                                                                  │ force-systemd-flag-257205 │ jenkins │ v1.37.0 │ 13 Oct 25 22:02 UTC │ 13 Oct 25 22:02 UTC │
	│ start   │ -p cert-expiration-546667 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-546667    │ jenkins │ v1.37.0 │ 13 Oct 25 22:02 UTC │ 13 Oct 25 22:02 UTC │
	│ delete  │ -p force-systemd-env-312094                                                                                                                                                                                                                   │ force-systemd-env-312094  │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │ 13 Oct 25 22:03 UTC │
	│ start   │ -p cert-options-194931 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-194931       │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │ 13 Oct 25 22:04 UTC │
	│ ssh     │ cert-options-194931 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-194931       │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │ 13 Oct 25 22:04 UTC │
	│ ssh     │ -p cert-options-194931 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-194931       │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │ 13 Oct 25 22:04 UTC │
	│ delete  │ -p cert-options-194931                                                                                                                                                                                                                        │ cert-options-194931       │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │ 13 Oct 25 22:04 UTC │
	│ start   │ -p old-k8s-version-061725 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-061725    │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │ 13 Oct 25 22:05 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-061725 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-061725    │ jenkins │ v1.37.0 │ 13 Oct 25 22:05 UTC │                     │
	│ stop    │ -p old-k8s-version-061725 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-061725    │ jenkins │ v1.37.0 │ 13 Oct 25 22:05 UTC │ 13 Oct 25 22:05 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-061725 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-061725    │ jenkins │ v1.37.0 │ 13 Oct 25 22:05 UTC │ 13 Oct 25 22:05 UTC │
	│ start   │ -p old-k8s-version-061725 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-061725    │ jenkins │ v1.37.0 │ 13 Oct 25 22:05 UTC │ 13 Oct 25 22:06 UTC │
	│ start   │ -p cert-expiration-546667 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-546667    │ jenkins │ v1.37.0 │ 13 Oct 25 22:05 UTC │ 13 Oct 25 22:06 UTC │
	│ delete  │ -p cert-expiration-546667                                                                                                                                                                                                                     │ cert-expiration-546667    │ jenkins │ v1.37.0 │ 13 Oct 25 22:06 UTC │ 13 Oct 25 22:06 UTC │
	│ start   │ -p no-preload-998398 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-998398         │ jenkins │ v1.37.0 │ 13 Oct 25 22:06 UTC │                     │
	│ image   │ old-k8s-version-061725 image list --format=json                                                                                                                                                                                               │ old-k8s-version-061725    │ jenkins │ v1.37.0 │ 13 Oct 25 22:06 UTC │ 13 Oct 25 22:06 UTC │
	│ pause   │ -p old-k8s-version-061725 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-061725    │ jenkins │ v1.37.0 │ 13 Oct 25 22:06 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 22:06:09
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 22:06:09.758062  185484 out.go:360] Setting OutFile to fd 1 ...
	I1013 22:06:09.758231  185484 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:06:09.758259  185484 out.go:374] Setting ErrFile to fd 2...
	I1013 22:06:09.758279  185484 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:06:09.758554  185484 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-2495/.minikube/bin
	I1013 22:06:09.758991  185484 out.go:368] Setting JSON to false
	I1013 22:06:09.760007  185484 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":6504,"bootTime":1760386666,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1013 22:06:09.760111  185484 start.go:141] virtualization:  
	I1013 22:06:09.763856  185484 out.go:179] * [no-preload-998398] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1013 22:06:09.767865  185484 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 22:06:09.767947  185484 notify.go:220] Checking for updates...
	I1013 22:06:09.774318  185484 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 22:06:09.777374  185484 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-2495/kubeconfig
	I1013 22:06:09.780300  185484 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-2495/.minikube
	I1013 22:06:09.783241  185484 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1013 22:06:09.786813  185484 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 22:06:09.790410  185484 config.go:182] Loaded profile config "old-k8s-version-061725": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1013 22:06:09.790511  185484 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 22:06:09.831217  185484 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1013 22:06:09.832797  185484 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 22:06:09.888338  185484 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-13 22:06:09.87958939 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 22:06:09.888435  185484 docker.go:318] overlay module found
	I1013 22:06:09.891745  185484 out.go:179] * Using the docker driver based on user configuration
	I1013 22:06:09.894691  185484 start.go:305] selected driver: docker
	I1013 22:06:09.894714  185484 start.go:925] validating driver "docker" against <nil>
	I1013 22:06:09.894727  185484 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 22:06:09.895428  185484 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 22:06:09.949700  185484 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-13 22:06:09.940288281 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 22:06:09.949857  185484 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1013 22:06:09.950087  185484 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 22:06:09.953110  185484 out.go:179] * Using Docker driver with root privileges
	I1013 22:06:09.955987  185484 cni.go:84] Creating CNI manager for ""
	I1013 22:06:09.956056  185484 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 22:06:09.956076  185484 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1013 22:06:09.956164  185484 start.go:349] cluster config:
	{Name:no-preload-998398 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-998398 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID
:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:06:09.959236  185484 out.go:179] * Starting "no-preload-998398" primary control-plane node in "no-preload-998398" cluster
	I1013 22:06:09.962141  185484 cache.go:123] Beginning downloading kic base image for docker with crio
	I1013 22:06:09.965143  185484 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1013 22:06:09.967863  185484 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:06:09.967936  185484 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1013 22:06:09.968001  185484 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/no-preload-998398/config.json ...
	I1013 22:06:09.968030  185484 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/no-preload-998398/config.json: {Name:mk22b74861007882575fd7cb7615d8974646132e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:06:09.968282  185484 cache.go:107] acquiring lock: {Name:mk9e23294529848fca5421602e65fa540d2ffe9f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 22:06:09.969094  185484 cache.go:115] /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1013 22:06:09.969116  185484 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 843.54µs
	I1013 22:06:09.969131  185484 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1013 22:06:09.969148  185484 cache.go:107] acquiring lock: {Name:mkb3086799a14ff1ebfc52e9ac9fba7b29bb30fb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 22:06:09.970007  185484 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1013 22:06:09.970369  185484 cache.go:107] acquiring lock: {Name:mkee07c6d8760320632919489ff1ecb2e0d22d89 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 22:06:09.970490  185484 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1013 22:06:09.970758  185484 cache.go:107] acquiring lock: {Name:mkfa4d23a7d0256f3cdf1cb2f33382ba7dbbfc71 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 22:06:09.970872  185484 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1013 22:06:09.971141  185484 cache.go:107] acquiring lock: {Name:mk62ce0678b4b3038f2e150b1ed151bc360f3641 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 22:06:09.971244  185484 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1013 22:06:09.971455  185484 cache.go:107] acquiring lock: {Name:mkb1d39c539d858c9b1c08f39ea3287bd6d91313 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 22:06:09.971560  185484 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1013 22:06:09.971758  185484 cache.go:107] acquiring lock: {Name:mk2eb24896c7f2889da7dd223ade65489103932b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 22:06:09.971988  185484 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1013 22:06:09.972231  185484 cache.go:107] acquiring lock: {Name:mkc9af2ce906bde484aa6a725326e8aa7fddb608 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 22:06:09.972408  185484 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1013 22:06:09.973758  185484 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1013 22:06:09.974272  185484 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1013 22:06:09.974508  185484 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1013 22:06:09.974662  185484 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1013 22:06:09.974788  185484 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1013 22:06:09.974929  185484 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1013 22:06:09.975061  185484 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1013 22:06:09.995384  185484 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1013 22:06:09.995409  185484 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1013 22:06:09.995427  185484 cache.go:232] Successfully downloaded all kic artifacts
	I1013 22:06:09.995449  185484 start.go:360] acquireMachinesLock for no-preload-998398: {Name:mk31dc6d65eb1bd4951f5e4881803fab3fbc7962 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 22:06:09.995563  185484 start.go:364] duration metric: took 95.071µs to acquireMachinesLock for "no-preload-998398"
	I1013 22:06:09.995598  185484 start.go:93] Provisioning new machine with config: &{Name:no-preload-998398 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-998398 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwa
rePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 22:06:09.995671  185484 start.go:125] createHost starting for "" (driver="docker")
	W1013 22:06:10.323066  182330 pod_ready.go:104] pod "coredns-5dd5756b68-6k2fk" is not "Ready", error: <nil>
	W1013 22:06:12.329203  182330 pod_ready.go:104] pod "coredns-5dd5756b68-6k2fk" is not "Ready", error: <nil>
	I1013 22:06:09.999369  185484 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1013 22:06:09.999602  185484 start.go:159] libmachine.API.Create for "no-preload-998398" (driver="docker")
	I1013 22:06:09.999710  185484 client.go:168] LocalClient.Create starting
	I1013 22:06:09.999826  185484 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem
	I1013 22:06:09.999872  185484 main.go:141] libmachine: Decoding PEM data...
	I1013 22:06:09.999890  185484 main.go:141] libmachine: Parsing certificate...
	I1013 22:06:09.999942  185484 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem
	I1013 22:06:09.999966  185484 main.go:141] libmachine: Decoding PEM data...
	I1013 22:06:09.999979  185484 main.go:141] libmachine: Parsing certificate...
	I1013 22:06:10.000409  185484 cli_runner.go:164] Run: docker network inspect no-preload-998398 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1013 22:06:10.031153  185484 cli_runner.go:211] docker network inspect no-preload-998398 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1013 22:06:10.031268  185484 network_create.go:284] running [docker network inspect no-preload-998398] to gather additional debugging logs...
	I1013 22:06:10.031292  185484 cli_runner.go:164] Run: docker network inspect no-preload-998398
	W1013 22:06:10.049772  185484 cli_runner.go:211] docker network inspect no-preload-998398 returned with exit code 1
	I1013 22:06:10.049842  185484 network_create.go:287] error running [docker network inspect no-preload-998398]: docker network inspect no-preload-998398: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-998398 not found
	I1013 22:06:10.049856  185484 network_create.go:289] output of [docker network inspect no-preload-998398]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-998398 not found
	
	** /stderr **
	I1013 22:06:10.049954  185484 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 22:06:10.068330  185484 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-95647f6063f5 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:2e:3d:b3:ce:26:60} reservation:<nil>}
	I1013 22:06:10.068678  185484 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-524c3512c6b6 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:06:88:a1:02:e0:8e} reservation:<nil>}
	I1013 22:06:10.069019  185484 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-2d17b8b5c002 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:aa:ca:29:7e:1f:a0} reservation:<nil>}
	I1013 22:06:10.069529  185484 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001bbb9a0}
	I1013 22:06:10.069555  185484 network_create.go:124] attempt to create docker network no-preload-998398 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1013 22:06:10.069622  185484 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-998398 no-preload-998398
	I1013 22:06:10.157217  185484 network_create.go:108] docker network no-preload-998398 192.168.76.0/24 created
	I1013 22:06:10.157255  185484 kic.go:121] calculated static IP "192.168.76.2" for the "no-preload-998398" container
	I1013 22:06:10.157358  185484 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1013 22:06:10.178661  185484 cli_runner.go:164] Run: docker volume create no-preload-998398 --label name.minikube.sigs.k8s.io=no-preload-998398 --label created_by.minikube.sigs.k8s.io=true
	I1013 22:06:10.199922  185484 oci.go:103] Successfully created a docker volume no-preload-998398
	I1013 22:06:10.199996  185484 cli_runner.go:164] Run: docker run --rm --name no-preload-998398-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-998398 --entrypoint /usr/bin/test -v no-preload-998398:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1013 22:06:10.295862  185484 cache.go:162] opening:  /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1013 22:06:10.314504  185484 cache.go:162] opening:  /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1013 22:06:10.323489  185484 cache.go:162] opening:  /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1013 22:06:10.332060  185484 cache.go:162] opening:  /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1013 22:06:10.335213  185484 cache.go:162] opening:  /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1013 22:06:10.344102  185484 cache.go:162] opening:  /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1013 22:06:10.365637  185484 cache.go:162] opening:  /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1013 22:06:10.386385  185484 cache.go:157] /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1013 22:06:10.386460  185484 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 415.010761ms
	I1013 22:06:10.386493  185484 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1013 22:06:10.796060  185484 oci.go:107] Successfully prepared a docker volume no-preload-998398
	I1013 22:06:10.796089  185484 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	W1013 22:06:10.796262  185484 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1013 22:06:10.796412  185484 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1013 22:06:10.872669  185484 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-998398 --name no-preload-998398 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-998398 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-998398 --network no-preload-998398 --ip 192.168.76.2 --volume no-preload-998398:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1013 22:06:10.961708  185484 cache.go:157] /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1013 22:06:10.961772  185484 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 990.633677ms
	I1013 22:06:10.961798  185484 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1013 22:06:11.288893  185484 cli_runner.go:164] Run: docker container inspect no-preload-998398 --format={{.State.Running}}
	I1013 22:06:11.350544  185484 cli_runner.go:164] Run: docker container inspect no-preload-998398 --format={{.State.Status}}
	I1013 22:06:11.375222  185484 cache.go:157] /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1013 22:06:11.375251  185484 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 1.404496536s
	I1013 22:06:11.375264  185484 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1013 22:06:11.417761  185484 cli_runner.go:164] Run: docker exec no-preload-998398 stat /var/lib/dpkg/alternatives/iptables
	I1013 22:06:11.497162  185484 oci.go:144] the created container "no-preload-998398" has a running status.
	I1013 22:06:11.497683  185484 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21724-2495/.minikube/machines/no-preload-998398/id_rsa...
	I1013 22:06:11.497222  185484 cache.go:157] /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1013 22:06:11.497747  185484 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 1.525516694s
	I1013 22:06:11.497759  185484 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1013 22:06:11.497303  185484 cache.go:157] /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1013 22:06:11.497771  185484 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 1.527408799s
	I1013 22:06:11.497777  185484 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1013 22:06:11.543005  185484 cache.go:157] /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1013 22:06:11.543039  185484 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 1.573893029s
	I1013 22:06:11.543050  185484 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1013 22:06:11.965730  185484 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21724-2495/.minikube/machines/no-preload-998398/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1013 22:06:12.005648  185484 cli_runner.go:164] Run: docker container inspect no-preload-998398 --format={{.State.Status}}
	I1013 22:06:12.026927  185484 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1013 22:06:12.026953  185484 kic_runner.go:114] Args: [docker exec --privileged no-preload-998398 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1013 22:06:12.100614  185484 cli_runner.go:164] Run: docker container inspect no-preload-998398 --format={{.State.Status}}
	I1013 22:06:12.129749  185484 machine.go:93] provisionDockerMachine start ...
	I1013 22:06:12.129836  185484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-998398
	I1013 22:06:12.157900  185484 main.go:141] libmachine: Using SSH client type: native
	I1013 22:06:12.158223  185484 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33061 <nil> <nil>}
	I1013 22:06:12.158233  185484 main.go:141] libmachine: About to run SSH command:
	hostname
	I1013 22:06:12.160148  185484 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1013 22:06:13.342097  185484 cache.go:157] /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1013 22:06:13.342172  185484 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 3.370415876s
	I1013 22:06:13.342198  185484 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1013 22:06:13.342220  185484 cache.go:87] Successfully saved all images to host disk.
	W1013 22:06:14.823822  182330 pod_ready.go:104] pod "coredns-5dd5756b68-6k2fk" is not "Ready", error: <nil>
	W1013 22:06:17.329790  182330 pod_ready.go:104] pod "coredns-5dd5756b68-6k2fk" is not "Ready", error: <nil>
	I1013 22:06:15.327455  185484 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-998398
	
	I1013 22:06:15.327480  185484 ubuntu.go:182] provisioning hostname "no-preload-998398"
	I1013 22:06:15.327545  185484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-998398
	I1013 22:06:15.352232  185484 main.go:141] libmachine: Using SSH client type: native
	I1013 22:06:15.352553  185484 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33061 <nil> <nil>}
	I1013 22:06:15.352572  185484 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-998398 && echo "no-preload-998398" | sudo tee /etc/hostname
	I1013 22:06:15.520453  185484 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-998398
	
	I1013 22:06:15.520549  185484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-998398
	I1013 22:06:15.543385  185484 main.go:141] libmachine: Using SSH client type: native
	I1013 22:06:15.543687  185484 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33061 <nil> <nil>}
	I1013 22:06:15.543706  185484 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-998398' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-998398/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-998398' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 22:06:15.703892  185484 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1013 22:06:15.703935  185484 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-2495/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-2495/.minikube}
	I1013 22:06:15.703956  185484 ubuntu.go:190] setting up certificates
	I1013 22:06:15.703965  185484 provision.go:84] configureAuth start
	I1013 22:06:15.704029  185484 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-998398
	I1013 22:06:15.725148  185484 provision.go:143] copyHostCerts
	I1013 22:06:15.725210  185484 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-2495/.minikube/ca.pem, removing ...
	I1013 22:06:15.725221  185484 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-2495/.minikube/ca.pem
	I1013 22:06:15.725292  185484 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-2495/.minikube/ca.pem (1082 bytes)
	I1013 22:06:15.725379  185484 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-2495/.minikube/cert.pem, removing ...
	I1013 22:06:15.725384  185484 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-2495/.minikube/cert.pem
	I1013 22:06:15.725409  185484 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-2495/.minikube/cert.pem (1123 bytes)
	I1013 22:06:15.725458  185484 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-2495/.minikube/key.pem, removing ...
	I1013 22:06:15.725462  185484 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-2495/.minikube/key.pem
	I1013 22:06:15.725485  185484 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-2495/.minikube/key.pem (1675 bytes)
	I1013 22:06:15.725528  185484 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-2495/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca-key.pem org=jenkins.no-preload-998398 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-998398]
	I1013 22:06:16.458063  185484 provision.go:177] copyRemoteCerts
	I1013 22:06:16.458124  185484 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 22:06:16.458161  185484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-998398
	I1013 22:06:16.477254  185484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33061 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/no-preload-998398/id_rsa Username:docker}
	I1013 22:06:16.580357  185484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1013 22:06:16.616465  185484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1013 22:06:16.637814  185484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1013 22:06:16.658512  185484 provision.go:87] duration metric: took 954.528885ms to configureAuth
	I1013 22:06:16.658584  185484 ubuntu.go:206] setting minikube options for container-runtime
	I1013 22:06:16.658813  185484 config.go:182] Loaded profile config "no-preload-998398": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:06:16.658965  185484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-998398
	I1013 22:06:16.679764  185484 main.go:141] libmachine: Using SSH client type: native
	I1013 22:06:16.680198  185484 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33061 <nil> <nil>}
	I1013 22:06:16.680231  185484 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1013 22:06:16.977118  185484 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1013 22:06:16.977186  185484 machine.go:96] duration metric: took 4.847418213s to provisionDockerMachine
	I1013 22:06:16.977209  185484 client.go:171] duration metric: took 6.977486545s to LocalClient.Create
	I1013 22:06:16.977238  185484 start.go:167] duration metric: took 6.977637228s to libmachine.API.Create "no-preload-998398"
	I1013 22:06:16.977278  185484 start.go:293] postStartSetup for "no-preload-998398" (driver="docker")
	I1013 22:06:16.977301  185484 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 22:06:16.977386  185484 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 22:06:16.977467  185484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-998398
	I1013 22:06:17.001009  185484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33061 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/no-preload-998398/id_rsa Username:docker}
	I1013 22:06:17.110014  185484 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 22:06:17.113974  185484 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1013 22:06:17.113999  185484 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1013 22:06:17.114010  185484 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-2495/.minikube/addons for local assets ...
	I1013 22:06:17.114062  185484 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-2495/.minikube/files for local assets ...
	I1013 22:06:17.114140  185484 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem -> 42992.pem in /etc/ssl/certs
	I1013 22:06:17.114239  185484 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1013 22:06:17.122511  185484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem --> /etc/ssl/certs/42992.pem (1708 bytes)
	I1013 22:06:17.151377  185484 start.go:296] duration metric: took 174.073215ms for postStartSetup
	I1013 22:06:17.151770  185484 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-998398
	I1013 22:06:17.173968  185484 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/no-preload-998398/config.json ...
	I1013 22:06:17.174340  185484 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 22:06:17.174393  185484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-998398
	I1013 22:06:17.208496  185484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33061 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/no-preload-998398/id_rsa Username:docker}
	I1013 22:06:17.317004  185484 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1013 22:06:17.324408  185484 start.go:128] duration metric: took 7.328723311s to createHost
	I1013 22:06:17.324434  185484 start.go:83] releasing machines lock for "no-preload-998398", held for 7.32886067s
	I1013 22:06:17.324506  185484 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-998398
	I1013 22:06:17.342413  185484 ssh_runner.go:195] Run: cat /version.json
	I1013 22:06:17.342468  185484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-998398
	I1013 22:06:17.342547  185484 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 22:06:17.342623  185484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-998398
	I1013 22:06:17.369274  185484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33061 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/no-preload-998398/id_rsa Username:docker}
	I1013 22:06:17.381313  185484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33061 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/no-preload-998398/id_rsa Username:docker}
	I1013 22:06:17.483741  185484 ssh_runner.go:195] Run: systemctl --version
	I1013 22:06:17.587243  185484 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1013 22:06:17.636407  185484 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 22:06:17.642720  185484 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 22:06:17.642800  185484 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 22:06:17.692641  185484 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1013 22:06:17.692675  185484 start.go:495] detecting cgroup driver to use...
	I1013 22:06:17.692706  185484 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1013 22:06:17.692770  185484 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1013 22:06:17.715628  185484 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1013 22:06:17.733378  185484 docker.go:218] disabling cri-docker service (if available) ...
	I1013 22:06:17.733448  185484 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 22:06:17.758350  185484 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 22:06:17.777215  185484 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 22:06:17.934613  185484 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 22:06:18.132728  185484 docker.go:234] disabling docker service ...
	I1013 22:06:18.132909  185484 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 22:06:18.163507  185484 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 22:06:18.179139  185484 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 22:06:18.330408  185484 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 22:06:18.503270  185484 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1013 22:06:18.520076  185484 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 22:06:18.537873  185484 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1013 22:06:18.537987  185484 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:06:18.548961  185484 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1013 22:06:18.549068  185484 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:06:18.560574  185484 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:06:18.569695  185484 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:06:18.583857  185484 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 22:06:18.593382  185484 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:06:18.602266  185484 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:06:18.618383  185484 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:06:18.627638  185484 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 22:06:18.635911  185484 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1013 22:06:18.644092  185484 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:06:18.794803  185484 ssh_runner.go:195] Run: sudo systemctl restart crio
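The sed edits above (pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl) all land in the same CRI-O drop-in file before crio is restarted. A quick way to confirm the result on the node, reconstructed from the commands shown rather than taken from minikube itself:

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# Expected values, per the sed commands in the log:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",   (first entry under default_sysctls)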
	I1013 22:06:19.370479  185484 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1013 22:06:19.370568  185484 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1013 22:06:19.379384  185484 start.go:563] Will wait 60s for crictl version
	I1013 22:06:19.379455  185484 ssh_runner.go:195] Run: which crictl
	I1013 22:06:19.386387  185484 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1013 22:06:19.419217  185484 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1013 22:06:19.419307  185484 ssh_runner.go:195] Run: crio --version
	I1013 22:06:19.454804  185484 ssh_runner.go:195] Run: crio --version
	I1013 22:06:19.499577  185484 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1013 22:06:19.502560  185484 cli_runner.go:164] Run: docker network inspect no-preload-998398 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 22:06:19.523531  185484 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1013 22:06:19.529674  185484 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 22:06:19.539501  185484 kubeadm.go:883] updating cluster {Name:no-preload-998398 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-998398 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 22:06:19.539621  185484 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:06:19.539661  185484 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 22:06:19.575318  185484 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1013 22:06:19.575340  185484 cache_images.go:89] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1013 22:06:19.575374  185484 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1013 22:06:19.575585  185484 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1013 22:06:19.575666  185484 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1013 22:06:19.575742  185484 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1013 22:06:19.575849  185484 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1013 22:06:19.575919  185484 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1013 22:06:19.575981  185484 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1013 22:06:19.576057  185484 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1013 22:06:19.578074  185484 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1013 22:06:19.578138  185484 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1013 22:06:19.578205  185484 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1013 22:06:19.578074  185484 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1013 22:06:19.578305  185484 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1013 22:06:19.578419  185484 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1013 22:06:19.578427  185484 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1013 22:06:19.578471  185484 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	W1013 22:06:19.843601  182330 pod_ready.go:104] pod "coredns-5dd5756b68-6k2fk" is not "Ready", error: <nil>
	W1013 22:06:22.336817  182330 pod_ready.go:104] pod "coredns-5dd5756b68-6k2fk" is not "Ready", error: <nil>
	I1013 22:06:19.798856  185484 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1013 22:06:19.803945  185484 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1013 22:06:19.807434  185484 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.1
	I1013 22:06:19.814206  185484 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.1
	I1013 22:06:19.819589  185484 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.1
	I1013 22:06:19.819751  185484 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.4-0
	I1013 22:06:19.851201  185484 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.1
	I1013 22:06:19.984442  185484 cache_images.go:117] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1013 22:06:19.984572  185484 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1013 22:06:19.984647  185484 ssh_runner.go:195] Run: which crictl
	I1013 22:06:20.047715  185484 cache_images.go:117] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc" in container runtime
	I1013 22:06:20.047885  185484 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1013 22:06:20.047960  185484 ssh_runner.go:195] Run: which crictl
	I1013 22:06:20.054015  185484 cache_images.go:117] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9" in container runtime
	I1013 22:06:20.054131  185484 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1013 22:06:20.054208  185484 ssh_runner.go:195] Run: which crictl
	I1013 22:06:20.078990  185484 cache_images.go:117] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0" in container runtime
	I1013 22:06:20.079108  185484 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1013 22:06:20.079196  185484 ssh_runner.go:195] Run: which crictl
	I1013 22:06:20.117370  185484 cache_images.go:117] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a" in container runtime
	I1013 22:06:20.117456  185484 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1013 22:06:20.117529  185484 ssh_runner.go:195] Run: which crictl
	I1013 22:06:20.117614  185484 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1013 22:06:20.117681  185484 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1013 22:06:20.117749  185484 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1013 22:06:20.117809  185484 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1013 22:06:20.117874  185484 cache_images.go:117] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196" in container runtime
	I1013 22:06:20.118014  185484 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1013 22:06:20.118083  185484 ssh_runner.go:195] Run: which crictl
	I1013 22:06:20.117914  185484 cache_images.go:117] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e" in container runtime
	I1013 22:06:20.118166  185484 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1013 22:06:20.118241  185484 ssh_runner.go:195] Run: which crictl
	I1013 22:06:20.258938  185484 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1013 22:06:20.259042  185484 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1013 22:06:20.259152  185484 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1013 22:06:20.259239  185484 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1013 22:06:20.259315  185484 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1013 22:06:20.259394  185484 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1013 22:06:20.259460  185484 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1013 22:06:20.518926  185484 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1013 22:06:20.519030  185484 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1013 22:06:20.519117  185484 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1013 22:06:20.519195  185484 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1013 22:06:20.519270  185484 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1013 22:06:20.519341  185484 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1013 22:06:20.519412  185484 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1013 22:06:20.732970  185484 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1013 22:06:20.733090  185484 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1013 22:06:20.733236  185484 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1013 22:06:20.733286  185484 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1013 22:06:20.733325  185484 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1013 22:06:20.733237  185484 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1013 22:06:20.733396  185484 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1013 22:06:20.733428  185484 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1013 22:06:20.733485  185484 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1013 22:06:20.733518  185484 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1013 22:06:20.733609  185484 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	W1013 22:06:20.818158  185484 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1013 22:06:20.818389  185484 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1013 22:06:20.840491  185484 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1013 22:06:20.840530  185484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (20402176 bytes)
	I1013 22:06:20.840609  185484 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1013 22:06:20.840675  185484 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1013 22:06:20.840933  185484 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1013 22:06:20.840693  185484 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1013 22:06:20.841025  185484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (22790144 bytes)
	I1013 22:06:20.840712  185484 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1013 22:06:20.841088  185484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1013 22:06:20.841108  185484 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1013 22:06:20.840731  185484 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1013 22:06:20.841145  185484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (15790592 bytes)
	I1013 22:06:20.840629  185484 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1013 22:06:20.841215  185484 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1013 22:06:20.974243  185484 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1013 22:06:20.974611  185484 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1013 22:06:21.041975  185484 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1013 22:06:21.042015  185484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (98216960 bytes)
	I1013 22:06:21.042105  185484 cache_images.go:117] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1013 22:06:21.042185  185484 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1013 22:06:21.042204  185484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (20730880 bytes)
	I1013 22:06:21.042259  185484 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1013 22:06:21.042273  185484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (24581632 bytes)
	I1013 22:06:21.042306  185484 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1013 22:06:21.042365  185484 ssh_runner.go:195] Run: which crictl
	I1013 22:06:21.591280  185484 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1013 22:06:21.591363  185484 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1013 22:06:21.777519  185484 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1013 22:06:21.799343  185484 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1013 22:06:21.799995  185484 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1013 22:06:21.939266  185484 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	W1013 22:06:24.823563  182330 pod_ready.go:104] pod "coredns-5dd5756b68-6k2fk" is not "Ready", error: <nil>
	W1013 22:06:26.826619  182330 pod_ready.go:104] pod "coredns-5dd5756b68-6k2fk" is not "Ready", error: <nil>
	I1013 22:06:24.762906  185484 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1: (2.962877851s)
	I1013 22:06:24.762933  185484 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1013 22:06:24.762952  185484 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1013 22:06:24.763009  185484 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.823717758s)
	I1013 22:06:24.763042  185484 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1013 22:06:24.763152  185484 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1013 22:06:24.763255  185484 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I1013 22:06:27.179407  185484 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (2.416111824s)
	I1013 22:06:27.179435  185484 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1013 22:06:27.179452  185484 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1013 22:06:27.179496  185484 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	I1013 22:06:27.179553  185484 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.416374801s)
	I1013 22:06:27.179572  185484 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1013 22:06:27.179591  185484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1013 22:06:28.689212  185484 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (1.509683444s)
	I1013 22:06:28.689242  185484 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1013 22:06:28.689265  185484 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1013 22:06:28.689318  185484 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
	W1013 22:06:29.323930  182330 pod_ready.go:104] pod "coredns-5dd5756b68-6k2fk" is not "Ready", error: <nil>
	W1013 22:06:31.828298  182330 pod_ready.go:104] pod "coredns-5dd5756b68-6k2fk" is not "Ready", error: <nil>
	I1013 22:06:29.931673  185484 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.242329984s)
	I1013 22:06:29.931702  185484 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1013 22:06:29.931724  185484 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1013 22:06:29.931771  185484 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1013 22:06:31.532711  185484 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.600896663s)
	I1013 22:06:31.532734  185484 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1013 22:06:31.532752  185484 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1013 22:06:31.532795  185484 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	W1013 22:06:34.324992  182330 pod_ready.go:104] pod "coredns-5dd5756b68-6k2fk" is not "Ready", error: <nil>
	I1013 22:06:34.823044  182330 pod_ready.go:94] pod "coredns-5dd5756b68-6k2fk" is "Ready"
	I1013 22:06:34.823120  182330 pod_ready.go:86] duration metric: took 31.006201286s for pod "coredns-5dd5756b68-6k2fk" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:06:34.826551  182330 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-061725" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:06:34.831619  182330 pod_ready.go:94] pod "etcd-old-k8s-version-061725" is "Ready"
	I1013 22:06:34.831690  182330 pod_ready.go:86] duration metric: took 5.071709ms for pod "etcd-old-k8s-version-061725" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:06:34.835164  182330 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-061725" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:06:34.852307  182330 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-061725" is "Ready"
	I1013 22:06:34.852382  182330 pod_ready.go:86] duration metric: took 17.146237ms for pod "kube-apiserver-old-k8s-version-061725" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:06:34.859753  182330 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-061725" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:06:35.021434  182330 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-061725" is "Ready"
	I1013 22:06:35.021464  182330 pod_ready.go:86] duration metric: took 161.626932ms for pod "kube-controller-manager-old-k8s-version-061725" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:06:35.222079  182330 pod_ready.go:83] waiting for pod "kube-proxy-kglxn" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:06:35.621778  182330 pod_ready.go:94] pod "kube-proxy-kglxn" is "Ready"
	I1013 22:06:35.621806  182330 pod_ready.go:86] duration metric: took 399.699303ms for pod "kube-proxy-kglxn" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:06:35.821888  182330 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-061725" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:06:36.221430  182330 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-061725" is "Ready"
	I1013 22:06:36.221448  182330 pod_ready.go:86] duration metric: took 399.53248ms for pod "kube-scheduler-old-k8s-version-061725" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:06:36.221459  182330 pod_ready.go:40] duration metric: took 32.411347139s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 22:06:36.374885  182330 start.go:624] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1013 22:06:36.378793  182330 out.go:203] 
	W1013 22:06:36.380312  182330 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1013 22:06:36.381532  182330 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1013 22:06:36.382635  182330 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-061725" cluster and "default" namespace by default
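The pod-readiness wait that process 182330 just completed can be reproduced by hand against the same cluster. A sketch using kubectl; the context name comes from the "Done!" line above, the label from the wait list in the log, and the timeout value is illustrative:

	kubectl --context old-k8s-version-061725 -n kube-system \
	  wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=120s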
	I1013 22:06:35.524299  185484 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (3.991483418s)
	I1013 22:06:35.524322  185484 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1013 22:06:35.524343  185484 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1013 22:06:35.524390  185484 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1013 22:06:36.136024  185484 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1013 22:06:36.136060  185484 cache_images.go:124] Successfully loaded all cached images
	I1013 22:06:36.136067  185484 cache_images.go:93] duration metric: took 16.5607129s to LoadCachedImages
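Each image in the LoadCachedImages step above follows the same three-part flow: an existence check on the node, a copy of the cached tarball from the host when that check fails, and a podman load into CRI-O's store. A minimal sketch of that flow, meant to run inside the node; the image names and paths mirror the log, and the host-to-node copy is only described in a comment because in minikube it goes through the SSH runner:

	#!/usr/bin/env bash
	# Illustrative only -- not minikube's code. Run inside the minikube node.
	set -euo pipefail
	IMAGES=(pause_3.10.1 coredns_v1.12.1 kube-scheduler_v1.34.1 kube-proxy_v1.34.1 \
	        kube-controller-manager_v1.34.1 kube-apiserver_v1.34.1 etcd_3.6.4-0 storage-provisioner_v5)
	for img in "${IMAGES[@]}"; do
	  tar="/var/lib/minikube/images/${img}"
	  # Existence check (the 'stat -c "%s %y"' step in the log). When it fails, minikube
	  # scp's the tarball from .minikube/cache/images/arm64/ on the host.
	  if ! stat -c "%s %y" "${tar}" >/dev/null 2>&1; then
	    echo "missing: ${tar} (would be copied from the host-side cache)"
	    continue
	  fi
	  # Import into the CRI-O image store, one image at a time, as the log shows.
	  sudo podman load -i "${tar}"
	done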
	I1013 22:06:36.136078  185484 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1013 22:06:36.136175  185484 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-998398 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-998398 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1013 22:06:36.136257  185484 ssh_runner.go:195] Run: crio config
	I1013 22:06:36.199578  185484 cni.go:84] Creating CNI manager for ""
	I1013 22:06:36.199598  185484 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 22:06:36.199614  185484 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1013 22:06:36.199638  185484 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-998398 NodeName:no-preload-998398 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 22:06:36.199756  185484 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-998398"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1013 22:06:36.199874  185484 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1013 22:06:36.209437  185484 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1013 22:06:36.209498  185484 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1013 22:06:36.220885  185484 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
	I1013 22:06:36.220975  185484 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1013 22:06:36.221113  185484 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/21724-2495/.minikube/cache/linux/arm64/v1.34.1/kubelet
	I1013 22:06:36.221473  185484 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21724-2495/.minikube/cache/linux/arm64/v1.34.1/kubeadm
	I1013 22:06:36.228683  185484 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1013 22:06:36.228717  185484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/cache/linux/arm64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (58130616 bytes)
	I1013 22:06:37.303565  185484 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 22:06:37.320253  185484 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1013 22:06:37.327379  185484 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1013 22:06:37.327413  185484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/cache/linux/arm64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (56426788 bytes)
	I1013 22:06:37.617399  185484 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1013 22:06:37.625057  185484 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1013 22:06:37.625096  185484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/cache/linux/arm64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (71434424 bytes)
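For kubelet and kubeadm the binaries are first downloaded into the host-side cache (the download.go lines above) and then copied to /var/lib/minikube/binaries/v1.34.1/ on the node. The same fetch-and-verify can be done by hand; a sketch using the dl.k8s.io URLs from the log (the loop and local file names are illustrative):

	VER=v1.34.1; ARCH=arm64
	for bin in kubectl kubelet kubeadm; do
	  curl -fL -o "${bin}"        "https://dl.k8s.io/release/${VER}/bin/linux/${ARCH}/${bin}"
	  curl -fL -o "${bin}.sha256" "https://dl.k8s.io/release/${VER}/bin/linux/${ARCH}/${bin}.sha256"
	  # dl.k8s.io .sha256 files contain only the digest, so build a check line manually.
	  echo "$(cat "${bin}.sha256")  ${bin}" | sha256sum --check -
	done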
	I1013 22:06:38.072962  185484 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 22:06:38.082137  185484 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1013 22:06:38.097335  185484 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 22:06:38.112033  185484 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
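The kubeadm config shown earlier is the payload of the kubeadm.yaml.new transfer just above. Once it is on the node it can be sanity-checked with kubeadm's own schema validation; a sketch, assuming kubeadm v1.34.1 is on the PATH there:

	kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new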
	I1013 22:06:38.126251  185484 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1013 22:06:38.130104  185484 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 22:06:38.143483  185484 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:06:38.265765  185484 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 22:06:38.288322  185484 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/no-preload-998398 for IP: 192.168.76.2
	I1013 22:06:38.288344  185484 certs.go:195] generating shared ca certs ...
	I1013 22:06:38.288359  185484 certs.go:227] acquiring lock for ca certs: {Name:mk2386d3847709c1fe7ff4ab092e2e3fd8551167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:06:38.288492  185484 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-2495/.minikube/ca.key
	I1013 22:06:38.288538  185484 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.key
	I1013 22:06:38.288549  185484 certs.go:257] generating profile certs ...
	I1013 22:06:38.288601  185484 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/no-preload-998398/client.key
	I1013 22:06:38.288615  185484 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/no-preload-998398/client.crt with IP's: []
	I1013 22:06:40.000306  185484 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/no-preload-998398/client.crt ...
	I1013 22:06:40.000338  185484 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/no-preload-998398/client.crt: {Name:mkeeac7154126727aaa3fed8ddd7c6410061a558 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:06:40.000586  185484 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/no-preload-998398/client.key ...
	I1013 22:06:40.000601  185484 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/no-preload-998398/client.key: {Name:mk8020e2d1d365cc6938cc134265e55a752ba5b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:06:40.000716  185484 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/no-preload-998398/apiserver.key.fe88bb21
	I1013 22:06:40.000737  185484 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/no-preload-998398/apiserver.crt.fe88bb21 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1013 22:06:40.862558  185484 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/no-preload-998398/apiserver.crt.fe88bb21 ...
	I1013 22:06:40.862585  185484 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/no-preload-998398/apiserver.crt.fe88bb21: {Name:mkd14ff7084c687b0894fed6c1b3fbde1f74b743 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:06:40.862766  185484 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/no-preload-998398/apiserver.key.fe88bb21 ...
	I1013 22:06:40.862782  185484 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/no-preload-998398/apiserver.key.fe88bb21: {Name:mk4bdeb8b712caf11512d0b8bccb7569786d821e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:06:40.862874  185484 certs.go:382] copying /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/no-preload-998398/apiserver.crt.fe88bb21 -> /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/no-preload-998398/apiserver.crt
	I1013 22:06:40.862964  185484 certs.go:386] copying /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/no-preload-998398/apiserver.key.fe88bb21 -> /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/no-preload-998398/apiserver.key
	I1013 22:06:40.863030  185484 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/no-preload-998398/proxy-client.key
	I1013 22:06:40.863052  185484 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/no-preload-998398/proxy-client.crt with IP's: []
	I1013 22:06:41.065939  185484 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/no-preload-998398/proxy-client.crt ...
	I1013 22:06:41.065974  185484 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/no-preload-998398/proxy-client.crt: {Name:mkcd829b0fe83fc581e6955c4c4ff1c754801bf2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:06:41.066156  185484 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/no-preload-998398/proxy-client.key ...
	I1013 22:06:41.066169  185484 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/no-preload-998398/proxy-client.key: {Name:mk6bc00de88e5c0cd7a912da42edab83af61ee15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:06:41.066360  185484 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/4299.pem (1338 bytes)
	W1013 22:06:41.066403  185484 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-2495/.minikube/certs/4299_empty.pem, impossibly tiny 0 bytes
	I1013 22:06:41.066417  185484 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca-key.pem (1679 bytes)
	I1013 22:06:41.066443  185484 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem (1082 bytes)
	I1013 22:06:41.066468  185484 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem (1123 bytes)
	I1013 22:06:41.066492  185484 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/key.pem (1675 bytes)
	I1013 22:06:41.066538  185484 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem (1708 bytes)
	I1013 22:06:41.067102  185484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 22:06:41.086893  185484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1013 22:06:41.106422  185484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 22:06:41.126126  185484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1013 22:06:41.144517  185484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/no-preload-998398/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1013 22:06:41.162338  185484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/no-preload-998398/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1013 22:06:41.179642  185484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/no-preload-998398/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 22:06:41.199333  185484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/no-preload-998398/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1013 22:06:41.219143  185484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/certs/4299.pem --> /usr/share/ca-certificates/4299.pem (1338 bytes)
	I1013 22:06:41.241761  185484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem --> /usr/share/ca-certificates/42992.pem (1708 bytes)
	I1013 22:06:41.260608  185484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 22:06:41.278999  185484 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 22:06:41.292652  185484 ssh_runner.go:195] Run: openssl version
	I1013 22:06:41.301914  185484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 22:06:41.311436  185484 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:06:41.315315  185484 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 20:59 /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:06:41.315409  185484 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:06:41.359375  185484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1013 22:06:41.368364  185484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4299.pem && ln -fs /usr/share/ca-certificates/4299.pem /etc/ssl/certs/4299.pem"
	I1013 22:06:41.379436  185484 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4299.pem
	I1013 22:06:41.383746  185484 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 13 21:06 /usr/share/ca-certificates/4299.pem
	I1013 22:06:41.383901  185484 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4299.pem
	I1013 22:06:41.426472  185484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4299.pem /etc/ssl/certs/51391683.0"
	I1013 22:06:41.435891  185484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/42992.pem && ln -fs /usr/share/ca-certificates/42992.pem /etc/ssl/certs/42992.pem"
	I1013 22:06:41.447656  185484 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42992.pem
	I1013 22:06:41.453728  185484 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 13 21:06 /usr/share/ca-certificates/42992.pem
	I1013 22:06:41.453794  185484 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42992.pem
	I1013 22:06:41.504747  185484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/42992.pem /etc/ssl/certs/3ec20f2e.0"
	I1013 22:06:41.515434  185484 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 22:06:41.520216  185484 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1013 22:06:41.520269  185484 kubeadm.go:400] StartCluster: {Name:no-preload-998398 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-998398 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:06:41.520353  185484 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 22:06:41.520411  185484 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 22:06:41.553697  185484 cri.go:89] found id: ""
	I1013 22:06:41.553784  185484 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1013 22:06:41.563307  185484 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1013 22:06:41.571767  185484 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1013 22:06:41.571874  185484 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1013 22:06:41.580884  185484 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1013 22:06:41.580915  185484 kubeadm.go:157] found existing configuration files:
	
	I1013 22:06:41.580987  185484 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1013 22:06:41.589061  185484 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1013 22:06:41.589126  185484 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1013 22:06:41.597544  185484 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1013 22:06:41.605665  185484 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1013 22:06:41.605756  185484 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1013 22:06:41.613689  185484 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1013 22:06:41.622138  185484 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1013 22:06:41.622224  185484 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1013 22:06:41.630446  185484 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1013 22:06:41.638774  185484 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1013 22:06:41.638884  185484 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1013 22:06:41.647451  185484 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1013 22:06:41.690644  185484 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1013 22:06:41.690979  185484 kubeadm.go:318] [preflight] Running pre-flight checks
	I1013 22:06:41.718028  185484 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1013 22:06:41.718109  185484 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1013 22:06:41.718154  185484 kubeadm.go:318] OS: Linux
	I1013 22:06:41.718205  185484 kubeadm.go:318] CGROUPS_CPU: enabled
	I1013 22:06:41.718260  185484 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1013 22:06:41.718314  185484 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1013 22:06:41.718369  185484 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1013 22:06:41.718422  185484 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1013 22:06:41.718476  185484 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1013 22:06:41.718527  185484 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1013 22:06:41.718580  185484 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1013 22:06:41.718632  185484 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1013 22:06:41.788499  185484 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1013 22:06:41.788647  185484 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1013 22:06:41.788772  185484 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1013 22:06:41.803997  185484 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1013 22:06:41.807390  185484 out.go:252]   - Generating certificates and keys ...
	I1013 22:06:41.807504  185484 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1013 22:06:41.807584  185484 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1013 22:06:41.967700  185484 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1013 22:06:43.250803  185484 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1013 22:06:43.610801  185484 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1013 22:06:43.891744  185484 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1013 22:06:44.008886  185484 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1013 22:06:44.009039  185484 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-998398] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1013 22:06:44.976794  185484 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1013 22:06:44.977292  185484 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-998398] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1013 22:06:45.270130  185484 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1013 22:06:46.221956  185484 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1013 22:06:46.665637  185484 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1013 22:06:46.665975  185484 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1013 22:06:46.958886  185484 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1013 22:06:47.022526  185484 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1013 22:06:48.870921  185484 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1013 22:06:49.466853  185484 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1013 22:06:50.290469  185484 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1013 22:06:50.291203  185484 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1013 22:06:50.294027  185484 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
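
	Note: the openssl/ln steps earlier in this log install each CA PEM into /etc/ssl/certs under its OpenSSL subject-hash name (for example b5213941.0 for minikubeCA.pem). The Go sketch below is not minikube's implementation; it is a minimal illustration of that symlink convention, assuming openssl is on PATH and using hypothetical paths for the PEM file and certs directory.

	// Hedged sketch: illustrates the /etc/ssl/certs subject-hash symlink
	// convention visible in the log above, where each CA PEM is hashed with
	// `openssl x509 -hash` and exposed as <hash>.0.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCA creates the hash-named symlink that OpenSSL-based TLS stacks use
	// to look up a CA certificate by its subject hash.
	func linkCA(pemPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", pemPath, err)
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join(certsDir, hash+".0")
		// Mirror the log's `test -L ... || ln -fs ...`: drop any stale link first.
		_ = os.Remove(link)
		return os.Symlink(pemPath, link)
	}

	func main() {
		// Hypothetical example paths, matching the ones seen in the log.
		if err := linkCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}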
	
	
	==> CRI-O <==
	Oct 13 22:06:40 old-k8s-version-061725 crio[650]: time="2025-10-13T22:06:40.865831102Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 13 22:06:40 old-k8s-version-061725 crio[650]: time="2025-10-13T22:06:40.876067252Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 13 22:06:40 old-k8s-version-061725 crio[650]: time="2025-10-13T22:06:40.87624694Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 13 22:06:40 old-k8s-version-061725 crio[650]: time="2025-10-13T22:06:40.876337464Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 13 22:06:40 old-k8s-version-061725 crio[650]: time="2025-10-13T22:06:40.892076689Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 13 22:06:40 old-k8s-version-061725 crio[650]: time="2025-10-13T22:06:40.89225801Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 13 22:06:40 old-k8s-version-061725 crio[650]: time="2025-10-13T22:06:40.892334225Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 13 22:06:40 old-k8s-version-061725 crio[650]: time="2025-10-13T22:06:40.904083347Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 13 22:06:40 old-k8s-version-061725 crio[650]: time="2025-10-13T22:06:40.904254887Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 13 22:06:40 old-k8s-version-061725 crio[650]: time="2025-10-13T22:06:40.904351927Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 13 22:06:40 old-k8s-version-061725 crio[650]: time="2025-10-13T22:06:40.90734267Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 13 22:06:40 old-k8s-version-061725 crio[650]: time="2025-10-13T22:06:40.90747614Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 13 22:06:44 old-k8s-version-061725 crio[650]: time="2025-10-13T22:06:44.573755012Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=d1e02636-95b7-407f-9024-cfc580d33ef5 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:06:44 old-k8s-version-061725 crio[650]: time="2025-10-13T22:06:44.574973533Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=32ffacee-d709-429c-b5f1-f480c55d2371 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:06:44 old-k8s-version-061725 crio[650]: time="2025-10-13T22:06:44.576738921Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-mxmft/dashboard-metrics-scraper" id=90d40e2b-2982-4199-81eb-4c69247d0c1d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:06:44 old-k8s-version-061725 crio[650]: time="2025-10-13T22:06:44.576955227Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:06:44 old-k8s-version-061725 crio[650]: time="2025-10-13T22:06:44.590454261Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:06:44 old-k8s-version-061725 crio[650]: time="2025-10-13T22:06:44.597710917Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:06:44 old-k8s-version-061725 crio[650]: time="2025-10-13T22:06:44.642253982Z" level=info msg="Created container 99688e937edfd2a11427cea137c4ab16f0c12b9ff59808610701730ba426a9b5: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-mxmft/dashboard-metrics-scraper" id=90d40e2b-2982-4199-81eb-4c69247d0c1d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:06:44 old-k8s-version-061725 crio[650]: time="2025-10-13T22:06:44.64453244Z" level=info msg="Starting container: 99688e937edfd2a11427cea137c4ab16f0c12b9ff59808610701730ba426a9b5" id=213243e0-cf3e-4a53-885b-2dfcb1b84e90 name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 22:06:44 old-k8s-version-061725 crio[650]: time="2025-10-13T22:06:44.64725325Z" level=info msg="Started container" PID=1692 containerID=99688e937edfd2a11427cea137c4ab16f0c12b9ff59808610701730ba426a9b5 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-mxmft/dashboard-metrics-scraper id=213243e0-cf3e-4a53-885b-2dfcb1b84e90 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b769fa712b70bcb0b5986a573ff0dda93e95ed8bd324858d35acc6dca52c32e1
	Oct 13 22:06:44 old-k8s-version-061725 conmon[1690]: conmon 99688e937edfd2a11427 <ninfo>: container 1692 exited with status 1
	Oct 13 22:06:44 old-k8s-version-061725 crio[650]: time="2025-10-13T22:06:44.948585129Z" level=info msg="Removing container: 6bee09c359a7fa70d52353a1fe3a59fa047bab38d7f5ed6eca9f8e28a9080b4f" id=c37b9402-334b-46ef-a170-d8f67c17f0ce name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 13 22:06:44 old-k8s-version-061725 crio[650]: time="2025-10-13T22:06:44.962994738Z" level=info msg="Error loading conmon cgroup of container 6bee09c359a7fa70d52353a1fe3a59fa047bab38d7f5ed6eca9f8e28a9080b4f: cgroup deleted" id=c37b9402-334b-46ef-a170-d8f67c17f0ce name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 13 22:06:44 old-k8s-version-061725 crio[650]: time="2025-10-13T22:06:44.980387574Z" level=info msg="Removed container 6bee09c359a7fa70d52353a1fe3a59fa047bab38d7f5ed6eca9f8e28a9080b4f: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-mxmft/dashboard-metrics-scraper" id=c37b9402-334b-46ef-a170-d8f67c17f0ce name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	99688e937edfd       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           7 seconds ago        Exited              dashboard-metrics-scraper   2                   b769fa712b70b       dashboard-metrics-scraper-5f989dc9cf-mxmft       kubernetes-dashboard
	f4214d686cc55       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           20 seconds ago       Running             storage-provisioner         2                   b6333766e944b       storage-provisioner                              kube-system
	bee272b4edb8b       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   32 seconds ago       Running             kubernetes-dashboard        0                   7e175e6395e27       kubernetes-dashboard-8694d4445c-6zgml            kubernetes-dashboard
	c92c35e7aab94       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           51 seconds ago       Running             busybox                     1                   f031ad9e0a8c0       busybox                                          default
	21645635ce14d       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           51 seconds ago       Exited              storage-provisioner         1                   b6333766e944b       storage-provisioner                              kube-system
	b5cfe60fee50a       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           51 seconds ago       Running             kube-proxy                  1                   2f775aadc28c9       kube-proxy-kglxn                                 kube-system
	cb2cbcad768d1       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           51 seconds ago       Running             kindnet-cni                 1                   c90b02a034846       kindnet-8j8n7                                    kube-system
	c648129a3253e       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           52 seconds ago       Running             coredns                     1                   8cebe87299770       coredns-5dd5756b68-6k2fk                         kube-system
	6eed8544403e9       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           About a minute ago   Running             etcd                        1                   8e948987a34df       etcd-old-k8s-version-061725                      kube-system
	b8caee63181a7       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           About a minute ago   Running             kube-scheduler              1                   ba305c44e4c43       kube-scheduler-old-k8s-version-061725            kube-system
	7b9a569532bd5       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           About a minute ago   Running             kube-apiserver              1                   719ecbfdc9972       kube-apiserver-old-k8s-version-061725            kube-system
	ea02ef13f9182       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           About a minute ago   Running             kube-controller-manager     1                   d618c304f120c       kube-controller-manager-old-k8s-version-061725   kube-system
	
	
	==> coredns [c648129a3253eaa1e9d7547c6256957ad5a93b39cb7716180e8208547ea6cdcc] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:46315 - 52769 "HINFO IN 1256378189370812.3910097882534820810. udp 54 false 512" NXDOMAIN qr,rd,ra 54 0.035256721s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-061725
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-061725
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6d66ff63385795e7745a92b3d96cb54f5b977801
	                    minikube.k8s.io/name=old-k8s-version-061725
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_13T22_04_51_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 Oct 2025 22:04:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-061725
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 Oct 2025 22:06:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 Oct 2025 22:06:30 +0000   Mon, 13 Oct 2025 22:04:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 Oct 2025 22:06:30 +0000   Mon, 13 Oct 2025 22:04:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 Oct 2025 22:06:30 +0000   Mon, 13 Oct 2025 22:04:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 13 Oct 2025 22:06:30 +0000   Mon, 13 Oct 2025 22:05:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-061725
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 b073986d8aa045508ab17637852ae6ea
	  System UUID:                a4ee82dc-aa4f-4d44-9281-73541a0cdcab
	  Boot ID:                    d306204e-e74c-4697-9887-c9f3c96cc083
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-5dd5756b68-6k2fk                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     110s
	  kube-system                 etcd-old-k8s-version-061725                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m2s
	  kube-system                 kindnet-8j8n7                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      110s
	  kube-system                 kube-apiserver-old-k8s-version-061725             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-controller-manager-old-k8s-version-061725    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-proxy-kglxn                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-scheduler-old-k8s-version-061725             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-mxmft        0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-6zgml             0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 108s                   kube-proxy       
	  Normal  Starting                 50s                    kube-proxy       
	  Normal  Starting                 2m10s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m10s (x8 over 2m10s)  kubelet          Node old-k8s-version-061725 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m10s (x8 over 2m10s)  kubelet          Node old-k8s-version-061725 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m10s (x8 over 2m10s)  kubelet          Node old-k8s-version-061725 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m3s                   kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    2m2s                   kubelet          Node old-k8s-version-061725 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m2s                   kubelet          Node old-k8s-version-061725 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  2m2s                   kubelet          Node old-k8s-version-061725 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           110s                   node-controller  Node old-k8s-version-061725 event: Registered Node old-k8s-version-061725 in Controller
	  Normal  NodeReady                95s                    kubelet          Node old-k8s-version-061725 status is now: NodeReady
	  Normal  Starting                 61s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  61s (x8 over 61s)      kubelet          Node old-k8s-version-061725 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    61s (x8 over 61s)      kubelet          Node old-k8s-version-061725 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     61s (x8 over 61s)      kubelet          Node old-k8s-version-061725 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           39s                    node-controller  Node old-k8s-version-061725 event: Registered Node old-k8s-version-061725 in Controller
	
	
	==> dmesg <==
	[Oct13 21:36] overlayfs: idmapped layers are currently not supported
	[ +36.803698] overlayfs: idmapped layers are currently not supported
	[Oct13 21:38] overlayfs: idmapped layers are currently not supported
	[Oct13 21:39] overlayfs: idmapped layers are currently not supported
	[Oct13 21:40] overlayfs: idmapped layers are currently not supported
	[Oct13 21:41] overlayfs: idmapped layers are currently not supported
	[Oct13 21:42] overlayfs: idmapped layers are currently not supported
	[  +7.684868] overlayfs: idmapped layers are currently not supported
	[Oct13 21:43] overlayfs: idmapped layers are currently not supported
	[ +17.500139] overlayfs: idmapped layers are currently not supported
	[Oct13 21:44] overlayfs: idmapped layers are currently not supported
	[ +25.978359] overlayfs: idmapped layers are currently not supported
	[Oct13 21:46] overlayfs: idmapped layers are currently not supported
	[Oct13 21:47] overlayfs: idmapped layers are currently not supported
	[Oct13 21:49] overlayfs: idmapped layers are currently not supported
	[Oct13 21:50] overlayfs: idmapped layers are currently not supported
	[Oct13 21:51] overlayfs: idmapped layers are currently not supported
	[Oct13 21:53] overlayfs: idmapped layers are currently not supported
	[Oct13 21:54] overlayfs: idmapped layers are currently not supported
	[Oct13 21:55] overlayfs: idmapped layers are currently not supported
	[Oct13 22:02] overlayfs: idmapped layers are currently not supported
	[Oct13 22:04] overlayfs: idmapped layers are currently not supported
	[ +37.438407] overlayfs: idmapped layers are currently not supported
	[Oct13 22:05] overlayfs: idmapped layers are currently not supported
	[Oct13 22:06] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [6eed8544403e99d9185f9c6d9e6d28a7fdd3896c087aeec1df5be870a03bbce0] <==
	{"level":"info","ts":"2025-10-13T22:05:52.28818Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-10-13T22:05:52.288282Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-13T22:05:52.288324Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-13T22:05:52.324834Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-13T22:05:52.324963Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-13T22:05:52.324998Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-13T22:05:52.32928Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-13T22:05:52.337666Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-13T22:05:52.337699Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-13T22:05:52.337855Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-13T22:05:52.337878Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-13T22:05:53.883817Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-13T22:05:53.883938Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-13T22:05:53.883987Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-10-13T22:05:53.884025Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-10-13T22:05:53.884059Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-13T22:05:53.884115Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-10-13T22:05:53.884148Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-13T22:05:53.892049Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-061725 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-13T22:05:53.89215Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-13T22:05:53.893236Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-13T22:05:53.892172Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-13T22:05:53.900845Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-10-13T22:05:53.915892Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-13T22:05:53.91599Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 22:06:52 up  1:49,  0 user,  load average: 5.12, 2.65, 2.10
	Linux old-k8s-version-061725 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [cb2cbcad768d11f9ca1e964c26de2e6f7d02f101ebb55b59faa4319adff9e6db] <==
	I1013 22:06:00.625809       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1013 22:06:00.628622       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1013 22:06:00.628768       1 main.go:148] setting mtu 1500 for CNI 
	I1013 22:06:00.628781       1 main.go:178] kindnetd IP family: "ipv4"
	I1013 22:06:00.628796       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-13T22:06:00Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1013 22:06:00.860262       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1013 22:06:00.860353       1 controller.go:381] "Waiting for informer caches to sync"
	I1013 22:06:00.860387       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1013 22:06:00.861112       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1013 22:06:30.861322       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1013 22:06:30.861434       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1013 22:06:30.861528       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1013 22:06:30.861623       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1013 22:06:32.262492       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1013 22:06:32.262582       1 metrics.go:72] Registering metrics
	I1013 22:06:32.262661       1 controller.go:711] "Syncing nftables rules"
	I1013 22:06:40.864817       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1013 22:06:40.865034       1 main.go:301] handling current node
	I1013 22:06:50.864731       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1013 22:06:50.864773       1 main.go:301] handling current node
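
	Note: the kindnet log above (like the kube-proxy and kube-scheduler logs below) shows the standard client-go pattern of starting informers and blocking until their caches sync, retrying list/watch calls that time out in the meantime. The sketch below is a minimal illustration of that pattern, not code taken from kindnet, and it assumes an in-cluster kubeconfig is available.

	// Hedged sketch: the client-go "Waiting for caches to sync" /
	// "Caches are synced" pattern seen in the component logs in this report.
	package main

	import (
		"log"
		"time"

		"k8s.io/client-go/informers"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/cache"
	)

	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatalf("building in-cluster config: %v", err)
		}
		clientset := kubernetes.NewForConfigOrDie(cfg)

		stop := make(chan struct{})
		defer close(stop)

		// One shared informer factory; the Node informer backs the kind of
		// list/watch the kindnet log shows timing out and then recovering.
		factory := informers.NewSharedInformerFactory(clientset, 30*time.Second)
		nodeInformer := factory.Core().V1().Nodes().Informer()

		factory.Start(stop)
		log.Println("Waiting for caches to sync")
		if !cache.WaitForCacheSync(stop, nodeInformer.HasSynced) {
			log.Fatal("failed to sync caches before stop signal")
		}
		log.Println("Caches are synced")
	}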
	
	
	==> kube-apiserver [7b9a569532bd578665c2febc0e862c8b3dfa6aa451acd0888258fd6f6bd613b9] <==
	I1013 22:05:59.311972       1 cache.go:39] Caches are synced for autoregister controller
	I1013 22:05:59.313434       1 trace.go:236] Trace[1902423577]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:b0ae7f86-16b3-49c5-b902-4395a954ccba,client:192.168.85.2,protocol:HTTP/2.0,resource:nodes,scope:resource,url:/api/v1/nodes/old-k8s-version-061725,user-agent:kubelet/v1.28.0 (linux/arm64) kubernetes/855e7c4,verb:GET (13-Oct-2025 22:05:58.740) (total time: 572ms):
	Trace[1902423577]: ---"About to write a response" 571ms (22:05:59.312)
	Trace[1902423577]: [572.882896ms] [572.882896ms] END
	I1013 22:05:59.468688       1 trace.go:236] Trace[1540345707]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:d9b0f51f-7746-476e-ac3e-f467bf48b66d,client:192.168.85.2,protocol:HTTP/2.0,resource:nodes,scope:resource,url:/api/v1/nodes,user-agent:kubelet/v1.28.0 (linux/arm64) kubernetes/855e7c4,verb:POST (13-Oct-2025 22:05:58.776) (total time: 692ms):
	Trace[1540345707]: ---"limitedReadBody succeeded" len:4139 26ms (22:05:58.802)
	Trace[1540345707]: ---"Write to database call failed" len:4139,err:nodes "old-k8s-version-061725" already exists 181ms (22:05:59.465)
	Trace[1540345707]: [692.414873ms] [692.414873ms] END
	I1013 22:05:59.493612       1 trace.go:236] Trace[2076288680]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:9e115732-69d3-4410-884f-418fbea1955a,client:192.168.85.2,protocol:HTTP/2.0,resource:events,scope:resource,url:/api/v1/namespaces/default/events,user-agent:kubelet/v1.28.0 (linux/arm64) kubernetes/855e7c4,verb:POST (13-Oct-2025 22:05:58.715) (total time: 777ms):
	Trace[2076288680]: [777.672185ms] [777.672185ms] END
	I1013 22:05:59.567403       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1013 22:05:59.734887       1 trace.go:236] Trace[466923378]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:9a8e6494-efb9-4c64-aed2-1d7312304c7e,client:192.168.85.2,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.28.0 (linux/arm64) kubernetes/855e7c4,verb:POST (13-Oct-2025 22:05:59.201) (total time: 533ms):
	Trace[466923378]: ---"Write to database call failed" len:2175,err:pods "etcd-old-k8s-version-061725" already exists 113ms (22:05:59.734)
	Trace[466923378]: [533.490573ms] [533.490573ms] END
	E1013 22:05:59.766501       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1013 22:06:03.421164       1 controller.go:624] quota admission added evaluator for: namespaces
	I1013 22:06:03.467064       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1013 22:06:03.498670       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1013 22:06:03.516575       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1013 22:06:03.535594       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1013 22:06:03.622461       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.132.29"}
	I1013 22:06:03.695999       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.41.254"}
	I1013 22:06:13.304351       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1013 22:06:13.322744       1 controller.go:624] quota admission added evaluator for: endpoints
	I1013 22:06:13.442821       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [ea02ef13f9182b9021218457ea3dab09ac4d242a483e773331138defe8ef3896] <==
	I1013 22:06:13.417748       1 shared_informer.go:318] Caches are synced for resource quota
	I1013 22:06:13.453822       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8694d4445c to 1"
	I1013 22:06:13.453850       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-5f989dc9cf to 1"
	I1013 22:06:13.469979       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-mxmft"
	I1013 22:06:13.471321       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-6zgml"
	I1013 22:06:13.479930       1 shared_informer.go:318] Caches are synced for HPA
	I1013 22:06:13.493284       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="40.498008ms"
	I1013 22:06:13.504687       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="52.455946ms"
	I1013 22:06:13.525431       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="20.149823ms"
	I1013 22:06:13.525574       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="37.037µs"
	I1013 22:06:13.531201       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="44.503µs"
	I1013 22:06:13.540850       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="47.404043ms"
	I1013 22:06:13.540939       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="47.309µs"
	I1013 22:06:13.545055       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="69.143µs"
	I1013 22:06:13.858823       1 shared_informer.go:318] Caches are synced for garbage collector
	I1013 22:06:13.895882       1 shared_informer.go:318] Caches are synced for garbage collector
	I1013 22:06:13.895912       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1013 22:06:19.898974       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="25.143347ms"
	I1013 22:06:19.899978       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="63.194µs"
	I1013 22:06:26.912348       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="63.137µs"
	I1013 22:06:27.925411       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="61.168µs"
	I1013 22:06:28.920033       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="95.948µs"
	I1013 22:06:34.451771       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="13.985455ms"
	I1013 22:06:34.453460       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="61.66µs"
	I1013 22:06:44.967596       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="69.742µs"
	
	
	==> kube-proxy [b5cfe60fee50aca9adeb9a7210f96baa84cf5ff86310fb648ab48513f8990dd9] <==
	I1013 22:06:01.762315       1 server_others.go:69] "Using iptables proxy"
	I1013 22:06:01.880253       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1013 22:06:02.046556       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1013 22:06:02.070225       1 server_others.go:152] "Using iptables Proxier"
	I1013 22:06:02.070266       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1013 22:06:02.070274       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1013 22:06:02.070307       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1013 22:06:02.070525       1 server.go:846] "Version info" version="v1.28.0"
	I1013 22:06:02.070535       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 22:06:02.071471       1 config.go:188] "Starting service config controller"
	I1013 22:06:02.071504       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1013 22:06:02.071524       1 config.go:97] "Starting endpoint slice config controller"
	I1013 22:06:02.071527       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1013 22:06:02.072028       1 config.go:315] "Starting node config controller"
	I1013 22:06:02.072035       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1013 22:06:02.172087       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1013 22:06:02.172144       1 shared_informer.go:318] Caches are synced for service config
	I1013 22:06:02.176472       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [b8caee63181a735c804ef9eb3da1040c9ec20a7c106dec4be1a1e2979c1008be] <==
	I1013 22:05:55.030232       1 serving.go:348] Generated self-signed cert in-memory
	W1013 22:05:58.840195       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1013 22:05:58.840310       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1013 22:05:58.840343       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1013 22:05:58.840384       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1013 22:05:59.461817       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1013 22:05:59.461922       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 22:05:59.463591       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 22:05:59.463673       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1013 22:05:59.476455       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1013 22:05:59.476552       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1013 22:05:59.565538       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 13 22:06:13 old-k8s-version-061725 kubelet[774]: I1013 22:06:13.495311     774 topology_manager.go:215] "Topology Admit Handler" podUID="085d0596-5060-49cb-ada7-51da9c251ab8" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-mxmft"
	Oct 13 22:06:13 old-k8s-version-061725 kubelet[774]: I1013 22:06:13.585515     774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/b62057b0-535c-46d1-87a0-f7e573c4b455-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-6zgml\" (UID: \"b62057b0-535c-46d1-87a0-f7e573c4b455\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-6zgml"
	Oct 13 22:06:13 old-k8s-version-061725 kubelet[774]: I1013 22:06:13.585809     774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxh48\" (UniqueName: \"kubernetes.io/projected/b62057b0-535c-46d1-87a0-f7e573c4b455-kube-api-access-mxh48\") pod \"kubernetes-dashboard-8694d4445c-6zgml\" (UID: \"b62057b0-535c-46d1-87a0-f7e573c4b455\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-6zgml"
	Oct 13 22:06:13 old-k8s-version-061725 kubelet[774]: I1013 22:06:13.686044     774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkr45\" (UniqueName: \"kubernetes.io/projected/085d0596-5060-49cb-ada7-51da9c251ab8-kube-api-access-wkr45\") pod \"dashboard-metrics-scraper-5f989dc9cf-mxmft\" (UID: \"085d0596-5060-49cb-ada7-51da9c251ab8\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-mxmft"
	Oct 13 22:06:13 old-k8s-version-061725 kubelet[774]: I1013 22:06:13.686136     774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/085d0596-5060-49cb-ada7-51da9c251ab8-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-mxmft\" (UID: \"085d0596-5060-49cb-ada7-51da9c251ab8\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-mxmft"
	Oct 13 22:06:13 old-k8s-version-061725 kubelet[774]: W1013 22:06:13.817096     774 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/9b67329f891f8d4a601d1da44c24f1f70a816b6b9dd1271a7207bc7ac21cc041/crio-7e175e6395e27db44d204131ed09021ed6b4ceac32fc6aba9febf56309f9382c WatchSource:0}: Error finding container 7e175e6395e27db44d204131ed09021ed6b4ceac32fc6aba9febf56309f9382c: Status 404 returned error can't find the container with id 7e175e6395e27db44d204131ed09021ed6b4ceac32fc6aba9febf56309f9382c
	Oct 13 22:06:14 old-k8s-version-061725 kubelet[774]: W1013 22:06:14.113848     774 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/9b67329f891f8d4a601d1da44c24f1f70a816b6b9dd1271a7207bc7ac21cc041/crio-b769fa712b70bcb0b5986a573ff0dda93e95ed8bd324858d35acc6dca52c32e1 WatchSource:0}: Error finding container b769fa712b70bcb0b5986a573ff0dda93e95ed8bd324858d35acc6dca52c32e1: Status 404 returned error can't find the container with id b769fa712b70bcb0b5986a573ff0dda93e95ed8bd324858d35acc6dca52c32e1
	Oct 13 22:06:26 old-k8s-version-061725 kubelet[774]: I1013 22:06:26.883892     774 scope.go:117] "RemoveContainer" containerID="e86545af98ecb6b492019efe9708d42d08126c3d1e8495cc8b74fa443e3ccee3"
	Oct 13 22:06:26 old-k8s-version-061725 kubelet[774]: I1013 22:06:26.908821     774 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-6zgml" podStartSLOduration=8.02134172 podCreationTimestamp="2025-10-13 22:06:13 +0000 UTC" firstStartedPulling="2025-10-13 22:06:13.823335707 +0000 UTC m=+22.386609854" lastFinishedPulling="2025-10-13 22:06:19.710753 +0000 UTC m=+28.274027122" observedRunningTime="2025-10-13 22:06:19.8749169 +0000 UTC m=+28.438191021" watchObservedRunningTime="2025-10-13 22:06:26.908758988 +0000 UTC m=+35.472033118"
	Oct 13 22:06:27 old-k8s-version-061725 kubelet[774]: I1013 22:06:27.888261     774 scope.go:117] "RemoveContainer" containerID="e86545af98ecb6b492019efe9708d42d08126c3d1e8495cc8b74fa443e3ccee3"
	Oct 13 22:06:27 old-k8s-version-061725 kubelet[774]: I1013 22:06:27.888613     774 scope.go:117] "RemoveContainer" containerID="6bee09c359a7fa70d52353a1fe3a59fa047bab38d7f5ed6eca9f8e28a9080b4f"
	Oct 13 22:06:27 old-k8s-version-061725 kubelet[774]: E1013 22:06:27.889248     774 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-mxmft_kubernetes-dashboard(085d0596-5060-49cb-ada7-51da9c251ab8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-mxmft" podUID="085d0596-5060-49cb-ada7-51da9c251ab8"
	Oct 13 22:06:28 old-k8s-version-061725 kubelet[774]: I1013 22:06:28.892608     774 scope.go:117] "RemoveContainer" containerID="6bee09c359a7fa70d52353a1fe3a59fa047bab38d7f5ed6eca9f8e28a9080b4f"
	Oct 13 22:06:28 old-k8s-version-061725 kubelet[774]: E1013 22:06:28.893433     774 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-mxmft_kubernetes-dashboard(085d0596-5060-49cb-ada7-51da9c251ab8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-mxmft" podUID="085d0596-5060-49cb-ada7-51da9c251ab8"
	Oct 13 22:06:31 old-k8s-version-061725 kubelet[774]: I1013 22:06:31.902245     774 scope.go:117] "RemoveContainer" containerID="21645635ce14d06941230e4e6235b9280959f93614831d10abae4cb0b70f1236"
	Oct 13 22:06:34 old-k8s-version-061725 kubelet[774]: I1013 22:06:34.099077     774 scope.go:117] "RemoveContainer" containerID="6bee09c359a7fa70d52353a1fe3a59fa047bab38d7f5ed6eca9f8e28a9080b4f"
	Oct 13 22:06:34 old-k8s-version-061725 kubelet[774]: E1013 22:06:34.100020     774 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-mxmft_kubernetes-dashboard(085d0596-5060-49cb-ada7-51da9c251ab8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-mxmft" podUID="085d0596-5060-49cb-ada7-51da9c251ab8"
	Oct 13 22:06:44 old-k8s-version-061725 kubelet[774]: I1013 22:06:44.573092     774 scope.go:117] "RemoveContainer" containerID="6bee09c359a7fa70d52353a1fe3a59fa047bab38d7f5ed6eca9f8e28a9080b4f"
	Oct 13 22:06:44 old-k8s-version-061725 kubelet[774]: I1013 22:06:44.936447     774 scope.go:117] "RemoveContainer" containerID="6bee09c359a7fa70d52353a1fe3a59fa047bab38d7f5ed6eca9f8e28a9080b4f"
	Oct 13 22:06:44 old-k8s-version-061725 kubelet[774]: I1013 22:06:44.940603     774 scope.go:117] "RemoveContainer" containerID="99688e937edfd2a11427cea137c4ab16f0c12b9ff59808610701730ba426a9b5"
	Oct 13 22:06:44 old-k8s-version-061725 kubelet[774]: E1013 22:06:44.941106     774 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-mxmft_kubernetes-dashboard(085d0596-5060-49cb-ada7-51da9c251ab8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-mxmft" podUID="085d0596-5060-49cb-ada7-51da9c251ab8"
	Oct 13 22:06:49 old-k8s-version-061725 kubelet[774]: I1013 22:06:49.302824     774 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 13 22:06:49 old-k8s-version-061725 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 13 22:06:49 old-k8s-version-061725 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 13 22:06:49 old-k8s-version-061725 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [bee272b4edb8bfa59232efb28167b53a742045a704a78d5cb04dab0c16c607ad] <==
	2025/10/13 22:06:19 Using namespace: kubernetes-dashboard
	2025/10/13 22:06:19 Using in-cluster config to connect to apiserver
	2025/10/13 22:06:19 Using secret token for csrf signing
	2025/10/13 22:06:19 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/13 22:06:19 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/13 22:06:19 Successful initial request to the apiserver, version: v1.28.0
	2025/10/13 22:06:19 Generating JWE encryption key
	2025/10/13 22:06:19 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/13 22:06:19 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/13 22:06:23 Initializing JWE encryption key from synchronized object
	2025/10/13 22:06:23 Creating in-cluster Sidecar client
	2025/10/13 22:06:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/13 22:06:23 Serving insecurely on HTTP port: 9090
	2025/10/13 22:06:19 Starting overwatch
	
	
	==> storage-provisioner [21645635ce14d06941230e4e6235b9280959f93614831d10abae4cb0b70f1236] <==
	I1013 22:06:00.969101       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1013 22:06:30.970777       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [f4214d686cc551d918ce7ab3ebc086aed6b9ef041c9d4f95ae3e52094f9f8fe4] <==
	I1013 22:06:32.044032       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1013 22:06:32.068933       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1013 22:06:32.069057       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1013 22:06:49.568204       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1013 22:06:49.568432       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-061725_db14fc1b-5dcb-46f3-a6b6-40d90b3aa07d!
	I1013 22:06:49.578367       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a7559b29-90e7-44b0-9ce8-e3c256861aa5", APIVersion:"v1", ResourceVersion:"658", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-061725_db14fc1b-5dcb-46f3-a6b6-40d90b3aa07d became leader
	I1013 22:06:49.669531       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-061725_db14fc1b-5dcb-46f3-a6b6-40d90b3aa07d!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-061725 -n old-k8s-version-061725
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-061725 -n old-k8s-version-061725: exit status 2 (564.553703ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-061725 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-061725
helpers_test.go:243: (dbg) docker inspect old-k8s-version-061725:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9b67329f891f8d4a601d1da44c24f1f70a816b6b9dd1271a7207bc7ac21cc041",
	        "Created": "2025-10-13T22:04:24.643297678Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 182454,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-13T22:05:44.548725553Z",
	            "FinishedAt": "2025-10-13T22:05:43.809166371Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/9b67329f891f8d4a601d1da44c24f1f70a816b6b9dd1271a7207bc7ac21cc041/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9b67329f891f8d4a601d1da44c24f1f70a816b6b9dd1271a7207bc7ac21cc041/hostname",
	        "HostsPath": "/var/lib/docker/containers/9b67329f891f8d4a601d1da44c24f1f70a816b6b9dd1271a7207bc7ac21cc041/hosts",
	        "LogPath": "/var/lib/docker/containers/9b67329f891f8d4a601d1da44c24f1f70a816b6b9dd1271a7207bc7ac21cc041/9b67329f891f8d4a601d1da44c24f1f70a816b6b9dd1271a7207bc7ac21cc041-json.log",
	        "Name": "/old-k8s-version-061725",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-061725:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-061725",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9b67329f891f8d4a601d1da44c24f1f70a816b6b9dd1271a7207bc7ac21cc041",
	                "LowerDir": "/var/lib/docker/overlay2/d87821f609fa965d573bd1d67dbfade9ad46250a90bd0a64282669cf2490b2b8-init/diff:/var/lib/docker/overlay2/160e637be34680c755ffc6109c686a7fe5c4dfcba06c9274b3806724dc064518/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d87821f609fa965d573bd1d67dbfade9ad46250a90bd0a64282669cf2490b2b8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d87821f609fa965d573bd1d67dbfade9ad46250a90bd0a64282669cf2490b2b8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d87821f609fa965d573bd1d67dbfade9ad46250a90bd0a64282669cf2490b2b8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-061725",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-061725/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-061725",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-061725",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-061725",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "caa917960a47798ece5d82fa14160d4e7e25e9c246505f70866fbc11a26c40a2",
	            "SandboxKey": "/var/run/docker/netns/caa917960a47",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33056"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33057"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33060"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33058"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33059"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-061725": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "32:8a:ab:96:0f:99",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "342c36a433557ef5e18c5eb6a5e2eade730d4334bff3f113c0f457eda67e9161",
	                    "EndpointID": "a4f4bbf924933ac83086927e6fb72694bbc6a68ba52cbcc18e4d5cd8b338f5f5",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-061725",
	                        "9b67329f891f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-061725 -n old-k8s-version-061725
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-061725 -n old-k8s-version-061725: exit status 2 (546.606001ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-061725 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-061725 logs -n 25: (1.752918968s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-122822 sudo containerd config dump                                                                                                                                                                                                  │ cilium-122822             │ jenkins │ v1.37.0 │ 13 Oct 25 21:55 UTC │                     │
	│ ssh     │ -p cilium-122822 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-122822             │ jenkins │ v1.37.0 │ 13 Oct 25 21:55 UTC │                     │
	│ ssh     │ -p cilium-122822 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-122822             │ jenkins │ v1.37.0 │ 13 Oct 25 21:55 UTC │                     │
	│ ssh     │ -p cilium-122822 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-122822             │ jenkins │ v1.37.0 │ 13 Oct 25 21:55 UTC │                     │
	│ ssh     │ -p cilium-122822 sudo crio config                                                                                                                                                                                                             │ cilium-122822             │ jenkins │ v1.37.0 │ 13 Oct 25 21:55 UTC │                     │
	│ delete  │ -p cilium-122822                                                                                                                                                                                                                              │ cilium-122822             │ jenkins │ v1.37.0 │ 13 Oct 25 21:55 UTC │ 13 Oct 25 21:55 UTC │
	│ start   │ -p force-systemd-env-312094 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-312094  │ jenkins │ v1.37.0 │ 13 Oct 25 21:55 UTC │                     │
	│ ssh     │ force-systemd-flag-257205 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-257205 │ jenkins │ v1.37.0 │ 13 Oct 25 22:02 UTC │ 13 Oct 25 22:02 UTC │
	│ delete  │ -p force-systemd-flag-257205                                                                                                                                                                                                                  │ force-systemd-flag-257205 │ jenkins │ v1.37.0 │ 13 Oct 25 22:02 UTC │ 13 Oct 25 22:02 UTC │
	│ start   │ -p cert-expiration-546667 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-546667    │ jenkins │ v1.37.0 │ 13 Oct 25 22:02 UTC │ 13 Oct 25 22:02 UTC │
	│ delete  │ -p force-systemd-env-312094                                                                                                                                                                                                                   │ force-systemd-env-312094  │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │ 13 Oct 25 22:03 UTC │
	│ start   │ -p cert-options-194931 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-194931       │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │ 13 Oct 25 22:04 UTC │
	│ ssh     │ cert-options-194931 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-194931       │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │ 13 Oct 25 22:04 UTC │
	│ ssh     │ -p cert-options-194931 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-194931       │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │ 13 Oct 25 22:04 UTC │
	│ delete  │ -p cert-options-194931                                                                                                                                                                                                                        │ cert-options-194931       │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │ 13 Oct 25 22:04 UTC │
	│ start   │ -p old-k8s-version-061725 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-061725    │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │ 13 Oct 25 22:05 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-061725 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-061725    │ jenkins │ v1.37.0 │ 13 Oct 25 22:05 UTC │                     │
	│ stop    │ -p old-k8s-version-061725 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-061725    │ jenkins │ v1.37.0 │ 13 Oct 25 22:05 UTC │ 13 Oct 25 22:05 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-061725 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-061725    │ jenkins │ v1.37.0 │ 13 Oct 25 22:05 UTC │ 13 Oct 25 22:05 UTC │
	│ start   │ -p old-k8s-version-061725 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-061725    │ jenkins │ v1.37.0 │ 13 Oct 25 22:05 UTC │ 13 Oct 25 22:06 UTC │
	│ start   │ -p cert-expiration-546667 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-546667    │ jenkins │ v1.37.0 │ 13 Oct 25 22:05 UTC │ 13 Oct 25 22:06 UTC │
	│ delete  │ -p cert-expiration-546667                                                                                                                                                                                                                     │ cert-expiration-546667    │ jenkins │ v1.37.0 │ 13 Oct 25 22:06 UTC │ 13 Oct 25 22:06 UTC │
	│ start   │ -p no-preload-998398 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-998398         │ jenkins │ v1.37.0 │ 13 Oct 25 22:06 UTC │                     │
	│ image   │ old-k8s-version-061725 image list --format=json                                                                                                                                                                                               │ old-k8s-version-061725    │ jenkins │ v1.37.0 │ 13 Oct 25 22:06 UTC │ 13 Oct 25 22:06 UTC │
	│ pause   │ -p old-k8s-version-061725 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-061725    │ jenkins │ v1.37.0 │ 13 Oct 25 22:06 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 22:06:09
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 22:06:09.758062  185484 out.go:360] Setting OutFile to fd 1 ...
	I1013 22:06:09.758231  185484 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:06:09.758259  185484 out.go:374] Setting ErrFile to fd 2...
	I1013 22:06:09.758279  185484 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:06:09.758554  185484 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-2495/.minikube/bin
	I1013 22:06:09.758991  185484 out.go:368] Setting JSON to false
	I1013 22:06:09.760007  185484 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":6504,"bootTime":1760386666,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1013 22:06:09.760111  185484 start.go:141] virtualization:  
	I1013 22:06:09.763856  185484 out.go:179] * [no-preload-998398] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1013 22:06:09.767865  185484 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 22:06:09.767947  185484 notify.go:220] Checking for updates...
	I1013 22:06:09.774318  185484 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 22:06:09.777374  185484 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-2495/kubeconfig
	I1013 22:06:09.780300  185484 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-2495/.minikube
	I1013 22:06:09.783241  185484 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1013 22:06:09.786813  185484 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 22:06:09.790410  185484 config.go:182] Loaded profile config "old-k8s-version-061725": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1013 22:06:09.790511  185484 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 22:06:09.831217  185484 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1013 22:06:09.832797  185484 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 22:06:09.888338  185484 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-13 22:06:09.87958939 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 22:06:09.888435  185484 docker.go:318] overlay module found
	I1013 22:06:09.891745  185484 out.go:179] * Using the docker driver based on user configuration
	I1013 22:06:09.894691  185484 start.go:305] selected driver: docker
	I1013 22:06:09.894714  185484 start.go:925] validating driver "docker" against <nil>
	I1013 22:06:09.894727  185484 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 22:06:09.895428  185484 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 22:06:09.949700  185484 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-13 22:06:09.940288281 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 22:06:09.949857  185484 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1013 22:06:09.950087  185484 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 22:06:09.953110  185484 out.go:179] * Using Docker driver with root privileges
	I1013 22:06:09.955987  185484 cni.go:84] Creating CNI manager for ""
	I1013 22:06:09.956056  185484 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 22:06:09.956076  185484 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1013 22:06:09.956164  185484 start.go:349] cluster config:
	{Name:no-preload-998398 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-998398 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:06:09.959236  185484 out.go:179] * Starting "no-preload-998398" primary control-plane node in "no-preload-998398" cluster
	I1013 22:06:09.962141  185484 cache.go:123] Beginning downloading kic base image for docker with crio
	I1013 22:06:09.965143  185484 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1013 22:06:09.967863  185484 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:06:09.967936  185484 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1013 22:06:09.968001  185484 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/no-preload-998398/config.json ...
	I1013 22:06:09.968030  185484 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/no-preload-998398/config.json: {Name:mk22b74861007882575fd7cb7615d8974646132e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:06:09.968282  185484 cache.go:107] acquiring lock: {Name:mk9e23294529848fca5421602e65fa540d2ffe9f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 22:06:09.969094  185484 cache.go:115] /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1013 22:06:09.969116  185484 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 843.54µs
	I1013 22:06:09.969131  185484 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1013 22:06:09.969148  185484 cache.go:107] acquiring lock: {Name:mkb3086799a14ff1ebfc52e9ac9fba7b29bb30fb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 22:06:09.970007  185484 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1013 22:06:09.970369  185484 cache.go:107] acquiring lock: {Name:mkee07c6d8760320632919489ff1ecb2e0d22d89 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 22:06:09.970490  185484 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1013 22:06:09.970758  185484 cache.go:107] acquiring lock: {Name:mkfa4d23a7d0256f3cdf1cb2f33382ba7dbbfc71 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 22:06:09.970872  185484 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1013 22:06:09.971141  185484 cache.go:107] acquiring lock: {Name:mk62ce0678b4b3038f2e150b1ed151bc360f3641 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 22:06:09.971244  185484 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1013 22:06:09.971455  185484 cache.go:107] acquiring lock: {Name:mkb1d39c539d858c9b1c08f39ea3287bd6d91313 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 22:06:09.971560  185484 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1013 22:06:09.971758  185484 cache.go:107] acquiring lock: {Name:mk2eb24896c7f2889da7dd223ade65489103932b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 22:06:09.971988  185484 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1013 22:06:09.972231  185484 cache.go:107] acquiring lock: {Name:mkc9af2ce906bde484aa6a725326e8aa7fddb608 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 22:06:09.972408  185484 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1013 22:06:09.973758  185484 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1013 22:06:09.974272  185484 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1013 22:06:09.974508  185484 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1013 22:06:09.974662  185484 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1013 22:06:09.974788  185484 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1013 22:06:09.974929  185484 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1013 22:06:09.975061  185484 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1013 22:06:09.995384  185484 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1013 22:06:09.995409  185484 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1013 22:06:09.995427  185484 cache.go:232] Successfully downloaded all kic artifacts
	I1013 22:06:09.995449  185484 start.go:360] acquireMachinesLock for no-preload-998398: {Name:mk31dc6d65eb1bd4951f5e4881803fab3fbc7962 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 22:06:09.995563  185484 start.go:364] duration metric: took 95.071µs to acquireMachinesLock for "no-preload-998398"
	I1013 22:06:09.995598  185484 start.go:93] Provisioning new machine with config: &{Name:no-preload-998398 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-998398 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 22:06:09.995671  185484 start.go:125] createHost starting for "" (driver="docker")
	W1013 22:06:10.323066  182330 pod_ready.go:104] pod "coredns-5dd5756b68-6k2fk" is not "Ready", error: <nil>
	W1013 22:06:12.329203  182330 pod_ready.go:104] pod "coredns-5dd5756b68-6k2fk" is not "Ready", error: <nil>
	I1013 22:06:09.999369  185484 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1013 22:06:09.999602  185484 start.go:159] libmachine.API.Create for "no-preload-998398" (driver="docker")
	I1013 22:06:09.999710  185484 client.go:168] LocalClient.Create starting
	I1013 22:06:09.999826  185484 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem
	I1013 22:06:09.999872  185484 main.go:141] libmachine: Decoding PEM data...
	I1013 22:06:09.999890  185484 main.go:141] libmachine: Parsing certificate...
	I1013 22:06:09.999942  185484 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem
	I1013 22:06:09.999966  185484 main.go:141] libmachine: Decoding PEM data...
	I1013 22:06:09.999979  185484 main.go:141] libmachine: Parsing certificate...
	I1013 22:06:10.000409  185484 cli_runner.go:164] Run: docker network inspect no-preload-998398 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1013 22:06:10.031153  185484 cli_runner.go:211] docker network inspect no-preload-998398 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1013 22:06:10.031268  185484 network_create.go:284] running [docker network inspect no-preload-998398] to gather additional debugging logs...
	I1013 22:06:10.031292  185484 cli_runner.go:164] Run: docker network inspect no-preload-998398
	W1013 22:06:10.049772  185484 cli_runner.go:211] docker network inspect no-preload-998398 returned with exit code 1
	I1013 22:06:10.049842  185484 network_create.go:287] error running [docker network inspect no-preload-998398]: docker network inspect no-preload-998398: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-998398 not found
	I1013 22:06:10.049856  185484 network_create.go:289] output of [docker network inspect no-preload-998398]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-998398 not found
	
	** /stderr **
	I1013 22:06:10.049954  185484 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 22:06:10.068330  185484 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-95647f6063f5 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:2e:3d:b3:ce:26:60} reservation:<nil>}
	I1013 22:06:10.068678  185484 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-524c3512c6b6 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:06:88:a1:02:e0:8e} reservation:<nil>}
	I1013 22:06:10.069019  185484 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-2d17b8b5c002 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:aa:ca:29:7e:1f:a0} reservation:<nil>}
	I1013 22:06:10.069529  185484 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001bbb9a0}
	I1013 22:06:10.069555  185484 network_create.go:124] attempt to create docker network no-preload-998398 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1013 22:06:10.069622  185484 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-998398 no-preload-998398
	I1013 22:06:10.157217  185484 network_create.go:108] docker network no-preload-998398 192.168.76.0/24 created
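The run above probes 192.168.49.0/24, 192.168.58.0/24 and 192.168.67.0/24, finds each already backed by a bridge interface, and settles on 192.168.76.0/24 before issuing docker network create. A minimal sketch of that walk over candidate /24 subnets, under the assumption that the taken-subnet check is abstracted into a hypothetical isTaken helper (minikube does this by inspecting host interfaces):

package main

import (
	"fmt"
	"os/exec"
)

// isTaken is a placeholder for the host-interface scan minikube performs;
// here it just consults a hard-coded set for illustration.
func isTaken(subnet string, taken map[string]bool) bool { return taken[subnet] }

func main() {
	taken := map[string]bool{"192.168.49.0/24": true, "192.168.58.0/24": true, "192.168.67.0/24": true}
	// Candidate subnets step the third octet by 9, mirroring the 49 -> 58 -> 67 -> 76 walk in the log.
	for third := 49; third <= 255; third += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", third)
		if isTaken(subnet, taken) {
			fmt.Println("skipping subnet", subnet, "that is taken")
			continue
		}
		gateway := fmt.Sprintf("192.168.%d.1", third)
		cmd := exec.Command("docker", "network", "create",
			"--driver=bridge",
			"--subnet="+subnet, "--gateway="+gateway,
			"-o", "--ip-masq", "-o", "--icc",
			"-o", "com.docker.network.driver.mtu=1500",
			"--label=created_by.minikube.sigs.k8s.io=true",
			"no-preload-998398")
		fmt.Println("creating network:", cmd.String())
		break
	}
}

The -o --ip-masq / -o --icc options and the MTU setting in this sketch match the flags recorded in the docker network create line of the log.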
	I1013 22:06:10.157255  185484 kic.go:121] calculated static IP "192.168.76.2" for the "no-preload-998398" container
	I1013 22:06:10.157358  185484 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1013 22:06:10.178661  185484 cli_runner.go:164] Run: docker volume create no-preload-998398 --label name.minikube.sigs.k8s.io=no-preload-998398 --label created_by.minikube.sigs.k8s.io=true
	I1013 22:06:10.199922  185484 oci.go:103] Successfully created a docker volume no-preload-998398
	I1013 22:06:10.199996  185484 cli_runner.go:164] Run: docker run --rm --name no-preload-998398-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-998398 --entrypoint /usr/bin/test -v no-preload-998398:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1013 22:06:10.295862  185484 cache.go:162] opening:  /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1013 22:06:10.314504  185484 cache.go:162] opening:  /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1013 22:06:10.323489  185484 cache.go:162] opening:  /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1013 22:06:10.332060  185484 cache.go:162] opening:  /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1013 22:06:10.335213  185484 cache.go:162] opening:  /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1013 22:06:10.344102  185484 cache.go:162] opening:  /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1013 22:06:10.365637  185484 cache.go:162] opening:  /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1013 22:06:10.386385  185484 cache.go:157] /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1013 22:06:10.386460  185484 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 415.010761ms
	I1013 22:06:10.386493  185484 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1013 22:06:10.796060  185484 oci.go:107] Successfully prepared a docker volume no-preload-998398
	I1013 22:06:10.796089  185484 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	W1013 22:06:10.796262  185484 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1013 22:06:10.796412  185484 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1013 22:06:10.872669  185484 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-998398 --name no-preload-998398 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-998398 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-998398 --network no-preload-998398 --ip 192.168.76.2 --volume no-preload-998398:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
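The docker run above publishes 8443, 22, 2376, 5000 and 32443 to 127.0.0.1 without fixed host ports, so Docker picks ephemeral ones; the later docker container inspect calls with the "22/tcp" template read back whichever port SSH landed on (33061 in this run). A small sketch of that read-back, assuming the container already exists and docker is on PATH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostPortFor asks Docker which ephemeral host port was bound to the given
// container port, mirroring the inspect template used in the log.
func hostPortFor(container, port string) (string, error) {
	format := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports "%s") 0).HostPort}}`, port)
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	if p, err := hostPortFor("no-preload-998398", "22/tcp"); err == nil {
		fmt.Println("ssh reachable on 127.0.0.1:" + p) // e.g. 33061 in this run
	}
}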
	I1013 22:06:10.961708  185484 cache.go:157] /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1013 22:06:10.961772  185484 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 990.633677ms
	I1013 22:06:10.961798  185484 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1013 22:06:11.288893  185484 cli_runner.go:164] Run: docker container inspect no-preload-998398 --format={{.State.Running}}
	I1013 22:06:11.350544  185484 cli_runner.go:164] Run: docker container inspect no-preload-998398 --format={{.State.Status}}
	I1013 22:06:11.375222  185484 cache.go:157] /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1013 22:06:11.375251  185484 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 1.404496536s
	I1013 22:06:11.375264  185484 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1013 22:06:11.417761  185484 cli_runner.go:164] Run: docker exec no-preload-998398 stat /var/lib/dpkg/alternatives/iptables
	I1013 22:06:11.497162  185484 oci.go:144] the created container "no-preload-998398" has a running status.
	I1013 22:06:11.497683  185484 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21724-2495/.minikube/machines/no-preload-998398/id_rsa...
	I1013 22:06:11.497222  185484 cache.go:157] /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1013 22:06:11.497747  185484 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 1.525516694s
	I1013 22:06:11.497759  185484 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1013 22:06:11.497303  185484 cache.go:157] /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1013 22:06:11.497771  185484 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 1.527408799s
	I1013 22:06:11.497777  185484 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1013 22:06:11.543005  185484 cache.go:157] /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1013 22:06:11.543039  185484 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 1.573893029s
	I1013 22:06:11.543050  185484 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1013 22:06:11.965730  185484 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21724-2495/.minikube/machines/no-preload-998398/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1013 22:06:12.005648  185484 cli_runner.go:164] Run: docker container inspect no-preload-998398 --format={{.State.Status}}
	I1013 22:06:12.026927  185484 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1013 22:06:12.026953  185484 kic_runner.go:114] Args: [docker exec --privileged no-preload-998398 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1013 22:06:12.100614  185484 cli_runner.go:164] Run: docker container inspect no-preload-998398 --format={{.State.Status}}
	I1013 22:06:12.129749  185484 machine.go:93] provisionDockerMachine start ...
	I1013 22:06:12.129836  185484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-998398
	I1013 22:06:12.157900  185484 main.go:141] libmachine: Using SSH client type: native
	I1013 22:06:12.158223  185484 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33061 <nil> <nil>}
	I1013 22:06:12.158233  185484 main.go:141] libmachine: About to run SSH command:
	hostname
	I1013 22:06:12.160148  185484 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1013 22:06:13.342097  185484 cache.go:157] /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1013 22:06:13.342172  185484 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 3.370415876s
	I1013 22:06:13.342198  185484 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1013 22:06:13.342220  185484 cache.go:87] Successfully saved all images to host disk.
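Because no crio preload tarball exists for v1.34.1, every image is saved individually under .minikube/cache/images/<arch>/<registry path>_<tag>, which is the path pattern in the cache.go lines above. A sketch of that key derivation, assuming the only sanitisation is replacing the tag colon with an underscore:

package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// cachePath maps an image reference to its on-disk cache location, following
// the pattern visible in the log (assumption: the tag colon becomes an underscore).
func cachePath(minikubeHome, arch, ref string) string {
	sanitized := strings.ReplaceAll(ref, ":", "_")
	return filepath.Join(minikubeHome, "cache", "images", arch, sanitized)
}

func main() {
	for _, ref := range []string{"registry.k8s.io/pause:3.10.1", "registry.k8s.io/etcd:3.6.4-0"} {
		fmt.Println(cachePath("/home/jenkins/minikube-integration/21724-2495/.minikube", "arm64", ref))
	}
}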
	W1013 22:06:14.823822  182330 pod_ready.go:104] pod "coredns-5dd5756b68-6k2fk" is not "Ready", error: <nil>
	W1013 22:06:17.329790  182330 pod_ready.go:104] pod "coredns-5dd5756b68-6k2fk" is not "Ready", error: <nil>
	I1013 22:06:15.327455  185484 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-998398
	
	I1013 22:06:15.327480  185484 ubuntu.go:182] provisioning hostname "no-preload-998398"
	I1013 22:06:15.327545  185484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-998398
	I1013 22:06:15.352232  185484 main.go:141] libmachine: Using SSH client type: native
	I1013 22:06:15.352553  185484 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33061 <nil> <nil>}
	I1013 22:06:15.352572  185484 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-998398 && echo "no-preload-998398" | sudo tee /etc/hostname
	I1013 22:06:15.520453  185484 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-998398
	
	I1013 22:06:15.520549  185484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-998398
	I1013 22:06:15.543385  185484 main.go:141] libmachine: Using SSH client type: native
	I1013 22:06:15.543687  185484 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33061 <nil> <nil>}
	I1013 22:06:15.543706  185484 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-998398' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-998398/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-998398' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 22:06:15.703892  185484 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1013 22:06:15.703935  185484 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-2495/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-2495/.minikube}
	I1013 22:06:15.703956  185484 ubuntu.go:190] setting up certificates
	I1013 22:06:15.703965  185484 provision.go:84] configureAuth start
	I1013 22:06:15.704029  185484 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-998398
	I1013 22:06:15.725148  185484 provision.go:143] copyHostCerts
	I1013 22:06:15.725210  185484 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-2495/.minikube/ca.pem, removing ...
	I1013 22:06:15.725221  185484 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-2495/.minikube/ca.pem
	I1013 22:06:15.725292  185484 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-2495/.minikube/ca.pem (1082 bytes)
	I1013 22:06:15.725379  185484 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-2495/.minikube/cert.pem, removing ...
	I1013 22:06:15.725384  185484 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-2495/.minikube/cert.pem
	I1013 22:06:15.725409  185484 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-2495/.minikube/cert.pem (1123 bytes)
	I1013 22:06:15.725458  185484 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-2495/.minikube/key.pem, removing ...
	I1013 22:06:15.725462  185484 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-2495/.minikube/key.pem
	I1013 22:06:15.725485  185484 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-2495/.minikube/key.pem (1675 bytes)
	I1013 22:06:15.725528  185484 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-2495/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca-key.pem org=jenkins.no-preload-998398 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-998398]
	I1013 22:06:16.458063  185484 provision.go:177] copyRemoteCerts
	I1013 22:06:16.458124  185484 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 22:06:16.458161  185484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-998398
	I1013 22:06:16.477254  185484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33061 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/no-preload-998398/id_rsa Username:docker}
	I1013 22:06:16.580357  185484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1013 22:06:16.616465  185484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1013 22:06:16.637814  185484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1013 22:06:16.658512  185484 provision.go:87] duration metric: took 954.528885ms to configureAuth
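configureAuth first refreshes the host-side copies of ca.pem, cert.pem and key.pem and then mints a per-machine server certificate whose SANs cover 127.0.0.1, the container IP and the machine names listed in the provision.go line above. A hedged sketch of assembling that SAN set (the helper and its signature are illustrative, not minikube's API):

package main

import (
	"fmt"
	"net"
)

// serverCertSANs collects the addresses and names the server.pem must cover,
// mirroring the san=[...] list in the provision.go line above.
func serverCertSANs(machineName, containerIP string) (ips []net.IP, dns []string) {
	ips = []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP(containerIP)}
	dns = []string{"localhost", "minikube", machineName}
	return ips, dns
}

func main() {
	ips, dns := serverCertSANs("no-preload-998398", "192.168.76.2")
	fmt.Println(ips, dns)
}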
	I1013 22:06:16.658584  185484 ubuntu.go:206] setting minikube options for container-runtime
	I1013 22:06:16.658813  185484 config.go:182] Loaded profile config "no-preload-998398": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:06:16.658965  185484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-998398
	I1013 22:06:16.679764  185484 main.go:141] libmachine: Using SSH client type: native
	I1013 22:06:16.680198  185484 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33061 <nil> <nil>}
	I1013 22:06:16.680231  185484 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1013 22:06:16.977118  185484 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1013 22:06:16.977186  185484 machine.go:96] duration metric: took 4.847418213s to provisionDockerMachine
	I1013 22:06:16.977209  185484 client.go:171] duration metric: took 6.977486545s to LocalClient.Create
	I1013 22:06:16.977238  185484 start.go:167] duration metric: took 6.977637228s to libmachine.API.Create "no-preload-998398"
	I1013 22:06:16.977278  185484 start.go:293] postStartSetup for "no-preload-998398" (driver="docker")
	I1013 22:06:16.977301  185484 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 22:06:16.977386  185484 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 22:06:16.977467  185484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-998398
	I1013 22:06:17.001009  185484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33061 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/no-preload-998398/id_rsa Username:docker}
	I1013 22:06:17.110014  185484 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 22:06:17.113974  185484 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1013 22:06:17.113999  185484 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1013 22:06:17.114010  185484 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-2495/.minikube/addons for local assets ...
	I1013 22:06:17.114062  185484 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-2495/.minikube/files for local assets ...
	I1013 22:06:17.114140  185484 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem -> 42992.pem in /etc/ssl/certs
	I1013 22:06:17.114239  185484 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1013 22:06:17.122511  185484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem --> /etc/ssl/certs/42992.pem (1708 bytes)
	I1013 22:06:17.151377  185484 start.go:296] duration metric: took 174.073215ms for postStartSetup
	I1013 22:06:17.151770  185484 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-998398
	I1013 22:06:17.173968  185484 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/no-preload-998398/config.json ...
	I1013 22:06:17.174340  185484 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 22:06:17.174393  185484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-998398
	I1013 22:06:17.208496  185484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33061 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/no-preload-998398/id_rsa Username:docker}
	I1013 22:06:17.317004  185484 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1013 22:06:17.324408  185484 start.go:128] duration metric: took 7.328723311s to createHost
	I1013 22:06:17.324434  185484 start.go:83] releasing machines lock for "no-preload-998398", held for 7.32886067s
	I1013 22:06:17.324506  185484 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-998398
	I1013 22:06:17.342413  185484 ssh_runner.go:195] Run: cat /version.json
	I1013 22:06:17.342468  185484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-998398
	I1013 22:06:17.342547  185484 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 22:06:17.342623  185484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-998398
	I1013 22:06:17.369274  185484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33061 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/no-preload-998398/id_rsa Username:docker}
	I1013 22:06:17.381313  185484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33061 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/no-preload-998398/id_rsa Username:docker}
	I1013 22:06:17.483741  185484 ssh_runner.go:195] Run: systemctl --version
	I1013 22:06:17.587243  185484 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1013 22:06:17.636407  185484 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 22:06:17.642720  185484 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 22:06:17.642800  185484 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 22:06:17.692641  185484 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
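Since kindnet will be installed later, the stock bridge and podman CNI configs under /etc/cni/net.d are renamed with a .mk_disabled suffix rather than deleted, which is what the find ... -exec mv command above does over SSH. A local Go equivalent of that rename pass, assuming plain filename matching is enough to identify the configs:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeCNIs renames bridge/podman CNI config files so cri-o ignores
// them, matching the .mk_disabled convention seen in the log.
func disableBridgeCNIs(dir string) error {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return err
	}
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return err
			}
			fmt.Println("disabled", src)
		}
	}
	return nil
}

func main() {
	_ = disableBridgeCNIs("/etc/cni/net.d")
}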
	I1013 22:06:17.692675  185484 start.go:495] detecting cgroup driver to use...
	I1013 22:06:17.692706  185484 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1013 22:06:17.692770  185484 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1013 22:06:17.715628  185484 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1013 22:06:17.733378  185484 docker.go:218] disabling cri-docker service (if available) ...
	I1013 22:06:17.733448  185484 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 22:06:17.758350  185484 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 22:06:17.777215  185484 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 22:06:17.934613  185484 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 22:06:18.132728  185484 docker.go:234] disabling docker service ...
	I1013 22:06:18.132909  185484 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 22:06:18.163507  185484 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 22:06:18.179139  185484 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 22:06:18.330408  185484 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 22:06:18.503270  185484 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1013 22:06:18.520076  185484 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 22:06:18.537873  185484 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1013 22:06:18.537987  185484 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:06:18.548961  185484 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1013 22:06:18.549068  185484 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:06:18.560574  185484 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:06:18.569695  185484 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:06:18.583857  185484 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 22:06:18.593382  185484 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:06:18.602266  185484 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:06:18.618383  185484 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:06:18.627638  185484 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 22:06:18.635911  185484 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1013 22:06:18.644092  185484 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:06:18.794803  185484 ssh_runner.go:195] Run: sudo systemctl restart crio
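Before restarting crio, the run rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pause_image is pinned to registry.k8s.io/pause:3.10.1, cgroup_manager is forced to cgroupfs with conmon_cgroup = "pod", and net.ipv4.ip_unprivileged_port_start=0 is injected into default_sysctls. A rough Go equivalent of the first two sed edits, working on an in-memory copy of the file and leaving the sysctl injection aside:

package main

import (
	"fmt"
	"regexp"
)

// applyCrioOverrides mimics the sed edits from the log on an in-memory copy
// of 02-crio.conf: pin the pause image and force the cgroupfs driver.
func applyCrioOverrides(conf string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
	return conf
}

func main() {
	in := "pause_image = \"registry.k8s.io/pause:3.10\"\ncgroup_manager = \"systemd\"\n"
	fmt.Print(applyCrioOverrides(in))
}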
	I1013 22:06:19.370479  185484 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1013 22:06:19.370568  185484 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1013 22:06:19.379384  185484 start.go:563] Will wait 60s for crictl version
	I1013 22:06:19.379455  185484 ssh_runner.go:195] Run: which crictl
	I1013 22:06:19.386387  185484 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1013 22:06:19.419217  185484 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1013 22:06:19.419307  185484 ssh_runner.go:195] Run: crio --version
	I1013 22:06:19.454804  185484 ssh_runner.go:195] Run: crio --version
	I1013 22:06:19.499577  185484 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1013 22:06:19.502560  185484 cli_runner.go:164] Run: docker network inspect no-preload-998398 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 22:06:19.523531  185484 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1013 22:06:19.529674  185484 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 22:06:19.539501  185484 kubeadm.go:883] updating cluster {Name:no-preload-998398 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-998398 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 22:06:19.539621  185484 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:06:19.539661  185484 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 22:06:19.575318  185484 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1013 22:06:19.575340  185484 cache_images.go:89] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1013 22:06:19.575374  185484 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1013 22:06:19.575585  185484 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1013 22:06:19.575666  185484 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1013 22:06:19.575742  185484 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1013 22:06:19.575849  185484 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1013 22:06:19.575919  185484 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1013 22:06:19.575981  185484 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1013 22:06:19.576057  185484 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1013 22:06:19.578074  185484 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1013 22:06:19.578138  185484 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1013 22:06:19.578205  185484 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1013 22:06:19.578074  185484 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1013 22:06:19.578305  185484 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1013 22:06:19.578419  185484 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1013 22:06:19.578427  185484 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1013 22:06:19.578471  185484 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	W1013 22:06:19.843601  182330 pod_ready.go:104] pod "coredns-5dd5756b68-6k2fk" is not "Ready", error: <nil>
	W1013 22:06:22.336817  182330 pod_ready.go:104] pod "coredns-5dd5756b68-6k2fk" is not "Ready", error: <nil>
	I1013 22:06:19.798856  185484 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1013 22:06:19.803945  185484 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1013 22:06:19.807434  185484 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.1
	I1013 22:06:19.814206  185484 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.1
	I1013 22:06:19.819589  185484 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.1
	I1013 22:06:19.819751  185484 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.4-0
	I1013 22:06:19.851201  185484 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.1
	I1013 22:06:19.984442  185484 cache_images.go:117] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1013 22:06:19.984572  185484 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1013 22:06:19.984647  185484 ssh_runner.go:195] Run: which crictl
	I1013 22:06:20.047715  185484 cache_images.go:117] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc" in container runtime
	I1013 22:06:20.047885  185484 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1013 22:06:20.047960  185484 ssh_runner.go:195] Run: which crictl
	I1013 22:06:20.054015  185484 cache_images.go:117] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9" in container runtime
	I1013 22:06:20.054131  185484 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1013 22:06:20.054208  185484 ssh_runner.go:195] Run: which crictl
	I1013 22:06:20.078990  185484 cache_images.go:117] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0" in container runtime
	I1013 22:06:20.079108  185484 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1013 22:06:20.079196  185484 ssh_runner.go:195] Run: which crictl
	I1013 22:06:20.117370  185484 cache_images.go:117] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a" in container runtime
	I1013 22:06:20.117456  185484 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1013 22:06:20.117529  185484 ssh_runner.go:195] Run: which crictl
	I1013 22:06:20.117614  185484 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1013 22:06:20.117681  185484 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1013 22:06:20.117749  185484 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1013 22:06:20.117809  185484 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1013 22:06:20.117874  185484 cache_images.go:117] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196" in container runtime
	I1013 22:06:20.118014  185484 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1013 22:06:20.118083  185484 ssh_runner.go:195] Run: which crictl
	I1013 22:06:20.117914  185484 cache_images.go:117] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e" in container runtime
	I1013 22:06:20.118166  185484 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1013 22:06:20.118241  185484 ssh_runner.go:195] Run: which crictl
	I1013 22:06:20.258938  185484 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1013 22:06:20.259042  185484 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1013 22:06:20.259152  185484 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1013 22:06:20.259239  185484 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1013 22:06:20.259315  185484 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1013 22:06:20.259394  185484 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1013 22:06:20.259460  185484 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1013 22:06:20.518926  185484 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1013 22:06:20.519030  185484 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1013 22:06:20.519117  185484 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1013 22:06:20.519195  185484 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1013 22:06:20.519270  185484 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1013 22:06:20.519341  185484 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1013 22:06:20.519412  185484 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1013 22:06:20.732970  185484 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1013 22:06:20.733090  185484 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1013 22:06:20.733236  185484 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1013 22:06:20.733286  185484 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1013 22:06:20.733325  185484 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1013 22:06:20.733237  185484 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1013 22:06:20.733396  185484 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1013 22:06:20.733428  185484 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1013 22:06:20.733485  185484 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1013 22:06:20.733518  185484 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1013 22:06:20.733609  185484 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	W1013 22:06:20.818158  185484 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1013 22:06:20.818389  185484 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1013 22:06:20.840491  185484 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1013 22:06:20.840530  185484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (20402176 bytes)
	I1013 22:06:20.840609  185484 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1013 22:06:20.840675  185484 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1013 22:06:20.840933  185484 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1013 22:06:20.840693  185484 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1013 22:06:20.841025  185484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (22790144 bytes)
	I1013 22:06:20.840712  185484 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1013 22:06:20.841088  185484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1013 22:06:20.841108  185484 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1013 22:06:20.840731  185484 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1013 22:06:20.841145  185484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (15790592 bytes)
	I1013 22:06:20.840629  185484 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1013 22:06:20.841215  185484 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1013 22:06:20.974243  185484 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1013 22:06:20.974611  185484 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1013 22:06:21.041975  185484 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1013 22:06:21.042015  185484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (98216960 bytes)
	I1013 22:06:21.042105  185484 cache_images.go:117] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1013 22:06:21.042185  185484 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1013 22:06:21.042204  185484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (20730880 bytes)
	I1013 22:06:21.042259  185484 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1013 22:06:21.042273  185484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (24581632 bytes)
	I1013 22:06:21.042306  185484 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1013 22:06:21.042365  185484 ssh_runner.go:195] Run: which crictl
	I1013 22:06:21.591280  185484 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1013 22:06:21.591363  185484 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1013 22:06:21.777519  185484 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1013 22:06:21.799343  185484 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1013 22:06:21.799995  185484 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1013 22:06:21.939266  185484 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	W1013 22:06:24.823563  182330 pod_ready.go:104] pod "coredns-5dd5756b68-6k2fk" is not "Ready", error: <nil>
	W1013 22:06:26.826619  182330 pod_ready.go:104] pod "coredns-5dd5756b68-6k2fk" is not "Ready", error: <nil>
	I1013 22:06:24.762906  185484 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1: (2.962877851s)
	I1013 22:06:24.762933  185484 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1013 22:06:24.762952  185484 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1013 22:06:24.763009  185484 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.823717758s)
	I1013 22:06:24.763042  185484 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1013 22:06:24.763152  185484 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1013 22:06:24.763255  185484 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I1013 22:06:27.179407  185484 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (2.416111824s)
	I1013 22:06:27.179435  185484 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1013 22:06:27.179452  185484 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1013 22:06:27.179496  185484 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	I1013 22:06:27.179553  185484 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.416374801s)
	I1013 22:06:27.179572  185484 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1013 22:06:27.179591  185484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1013 22:06:28.689212  185484 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (1.509683444s)
	I1013 22:06:28.689242  185484 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1013 22:06:28.689265  185484 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1013 22:06:28.689318  185484 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
	W1013 22:06:29.323930  182330 pod_ready.go:104] pod "coredns-5dd5756b68-6k2fk" is not "Ready", error: <nil>
	W1013 22:06:31.828298  182330 pod_ready.go:104] pod "coredns-5dd5756b68-6k2fk" is not "Ready", error: <nil>
	I1013 22:06:29.931673  185484 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.242329984s)
	I1013 22:06:29.931702  185484 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1013 22:06:29.931724  185484 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1013 22:06:29.931771  185484 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1013 22:06:31.532711  185484 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.600896663s)
	I1013 22:06:31.532734  185484 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1013 22:06:31.532752  185484 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1013 22:06:31.532795  185484 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	W1013 22:06:34.324992  182330 pod_ready.go:104] pod "coredns-5dd5756b68-6k2fk" is not "Ready", error: <nil>
	I1013 22:06:34.823044  182330 pod_ready.go:94] pod "coredns-5dd5756b68-6k2fk" is "Ready"
	I1013 22:06:34.823120  182330 pod_ready.go:86] duration metric: took 31.006201286s for pod "coredns-5dd5756b68-6k2fk" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:06:34.826551  182330 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-061725" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:06:34.831619  182330 pod_ready.go:94] pod "etcd-old-k8s-version-061725" is "Ready"
	I1013 22:06:34.831690  182330 pod_ready.go:86] duration metric: took 5.071709ms for pod "etcd-old-k8s-version-061725" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:06:34.835164  182330 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-061725" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:06:34.852307  182330 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-061725" is "Ready"
	I1013 22:06:34.852382  182330 pod_ready.go:86] duration metric: took 17.146237ms for pod "kube-apiserver-old-k8s-version-061725" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:06:34.859753  182330 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-061725" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:06:35.021434  182330 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-061725" is "Ready"
	I1013 22:06:35.021464  182330 pod_ready.go:86] duration metric: took 161.626932ms for pod "kube-controller-manager-old-k8s-version-061725" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:06:35.222079  182330 pod_ready.go:83] waiting for pod "kube-proxy-kglxn" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:06:35.621778  182330 pod_ready.go:94] pod "kube-proxy-kglxn" is "Ready"
	I1013 22:06:35.621806  182330 pod_ready.go:86] duration metric: took 399.699303ms for pod "kube-proxy-kglxn" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:06:35.821888  182330 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-061725" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:06:36.221430  182330 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-061725" is "Ready"
	I1013 22:06:36.221448  182330 pod_ready.go:86] duration metric: took 399.53248ms for pod "kube-scheduler-old-k8s-version-061725" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:06:36.221459  182330 pod_ready.go:40] duration metric: took 32.411347139s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 22:06:36.374885  182330 start.go:624] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1013 22:06:36.378793  182330 out.go:203] 
	W1013 22:06:36.380312  182330 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1013 22:06:36.381532  182330 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1013 22:06:36.382635  182330 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-061725" cluster and "default" namespace by default
	I1013 22:06:35.524299  185484 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (3.991483418s)
	I1013 22:06:35.524322  185484 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1013 22:06:35.524343  185484 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1013 22:06:35.524390  185484 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1013 22:06:36.136024  185484 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1013 22:06:36.136060  185484 cache_images.go:124] Successfully loaded all cached images
	I1013 22:06:36.136067  185484 cache_images.go:93] duration metric: took 16.5607129s to LoadCachedImages
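The image-load sequence above is the no-preload path: each cached tarball under /var/lib/minikube/images is pushed into CRI-O's image store one at a time with "sudo podman load -i", which is why LoadCachedImages accounts for roughly 16.5s here. A minimal, stand-alone sketch of that loop, using plain os/exec rather than minikube's ssh_runner (paths and tarball names taken from the log, so purely illustrative):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"path/filepath"
)

func main() {
	// Tarballs named as in the log; the directory is the one minikube uses inside the node.
	dir := "/var/lib/minikube/images"
	tarballs := []string{
		"kube-controller-manager_v1.34.1",
		"kube-apiserver_v1.34.1",
		"etcd_3.6.4-0",
		"storage-provisioner_v5",
	}
	for _, t := range tarballs {
		path := filepath.Join(dir, t)
		// Equivalent of: sudo podman load -i <path>
		out, err := exec.Command("sudo", "podman", "load", "-i", path).CombinedOutput()
		if err != nil {
			log.Fatalf("podman load %s failed: %v\n%s", path, err, out)
		}
		fmt.Printf("loaded %s\n", path)
	}
}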
	I1013 22:06:36.136078  185484 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1013 22:06:36.136175  185484 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-998398 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-998398 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
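The drop-in above rewrites the kubelet unit so ExecStart points at the version-pinned binary under /var/lib/minikube/binaries and carries the per-node flags (hostname-override, node-ip, kubeconfig paths). A rough sketch of assembling that ExecStart line from node parameters, with a trimmed flag set; this is not minikube's actual template code:

package main

import "fmt"

func main() {
	// Values taken from the drop-in shown in the log above.
	version := "v1.34.1"
	nodeName := "no-preload-998398"
	nodeIP := "192.168.76.2"
	execStart := fmt.Sprintf(
		"ExecStart=/var/lib/minikube/binaries/%s/kubelet "+
			"--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf "+
			"--config=/var/lib/kubelet/config.yaml "+
			"--hostname-override=%s "+
			"--kubeconfig=/etc/kubernetes/kubelet.conf "+
			"--node-ip=%s",
		version, nodeName, nodeIP)
	fmt.Println(execStart)
}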
	I1013 22:06:36.136257  185484 ssh_runner.go:195] Run: crio config
	I1013 22:06:36.199578  185484 cni.go:84] Creating CNI manager for ""
	I1013 22:06:36.199598  185484 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 22:06:36.199614  185484 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1013 22:06:36.199638  185484 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-998398 NodeName:no-preload-998398 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 22:06:36.199756  185484 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-998398"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
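The generated kubeadm.yaml above is four YAML documents stacked with "---" separators: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A small stand-alone sketch that splits such a file and reports each document's kind (the embedded config is a trimmed stand-in, not the full file):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Trimmed stand-in for /var/tmp/minikube/kubeadm.yaml: four documents joined by "---".
	config := `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
`
	for i, doc := range strings.Split(config, "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			// Report only the kind of each stacked document.
			if strings.HasPrefix(strings.TrimSpace(line), "kind:") {
				fmt.Printf("document %d: %s\n", i+1, strings.TrimSpace(line))
			}
		}
	}
}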
	
	I1013 22:06:36.199874  185484 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1013 22:06:36.209437  185484 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1013 22:06:36.209498  185484 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1013 22:06:36.220885  185484 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
	I1013 22:06:36.220975  185484 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1013 22:06:36.221113  185484 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/21724-2495/.minikube/cache/linux/arm64/v1.34.1/kubelet
	I1013 22:06:36.221473  185484 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21724-2495/.minikube/cache/linux/arm64/v1.34.1/kubeadm
	I1013 22:06:36.228683  185484 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1013 22:06:36.228717  185484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/cache/linux/arm64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (58130616 bytes)
	I1013 22:06:37.303565  185484 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 22:06:37.320253  185484 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1013 22:06:37.327379  185484 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1013 22:06:37.327413  185484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/cache/linux/arm64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (56426788 bytes)
	I1013 22:06:37.617399  185484 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1013 22:06:37.625057  185484 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1013 22:06:37.625096  185484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/cache/linux/arm64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (71434424 bytes)
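Because this profile skips the preload, the v1.34.1 kubectl, kubelet, and kubeadm binaries are fetched from dl.k8s.io with a checksum=file:...sha256 fragment and then scp'd into /var/lib/minikube/binaries on the node. A sketch of the download-and-verify half only, using the kubelet URL from the log; the SSH transfer is omitted, and it assumes the published .sha256 file holds the hex digest:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
	"strings"
)

// fetch downloads url into dest and returns the SHA-256 hex digest of what was written.
func fetch(url, dest string) (string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return "", fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	f, err := os.Create(dest)
	if err != nil {
		return "", err
	}
	defer f.Close()
	h := sha256.New()
	// Hash while writing so the file is only read once.
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	base := "https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet"
	got, err := fetch(base, "kubelet")
	if err != nil {
		log.Fatal(err)
	}
	// The companion .sha256 file contains the expected digest.
	resp, err := http.Get(base + ".sha256")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	want, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	if strings.Fields(string(want))[0] != got {
		log.Fatalf("checksum mismatch: got %s", got)
	}
	fmt.Println("kubelet verified:", got)
}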
	I1013 22:06:38.072962  185484 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 22:06:38.082137  185484 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1013 22:06:38.097335  185484 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 22:06:38.112033  185484 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1013 22:06:38.126251  185484 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1013 22:06:38.130104  185484 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
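The bash one-liner above makes the /etc/hosts edit idempotent: it drops any existing line ending in a tab plus control-plane.minikube.internal, appends a fresh mapping to the node IP, and sudo-copies the result back into place. A rough local Go equivalent that writes to hosts.new instead of replacing /etc/hosts directly:

package main

import (
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	const host = "control-plane.minikube.internal"
	const ip = "192.168.76.2"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		log.Fatal(err)
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		// Drop any existing mapping for the control-plane name, like the grep -v in the log.
		if strings.HasSuffix(strings.TrimRight(line, " \t"), "\t"+host) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	// Write to a scratch file; minikube sudo-copies the result back over /etc/hosts.
	if err := os.WriteFile("hosts.new", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		log.Fatal(err)
	}
	fmt.Println("wrote hosts.new with", ip, host)
}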
	I1013 22:06:38.143483  185484 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:06:38.265765  185484 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 22:06:38.288322  185484 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/no-preload-998398 for IP: 192.168.76.2
	I1013 22:06:38.288344  185484 certs.go:195] generating shared ca certs ...
	I1013 22:06:38.288359  185484 certs.go:227] acquiring lock for ca certs: {Name:mk2386d3847709c1fe7ff4ab092e2e3fd8551167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:06:38.288492  185484 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-2495/.minikube/ca.key
	I1013 22:06:38.288538  185484 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.key
	I1013 22:06:38.288549  185484 certs.go:257] generating profile certs ...
	I1013 22:06:38.288601  185484 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/no-preload-998398/client.key
	I1013 22:06:38.288615  185484 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/no-preload-998398/client.crt with IP's: []
	I1013 22:06:40.000306  185484 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/no-preload-998398/client.crt ...
	I1013 22:06:40.000338  185484 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/no-preload-998398/client.crt: {Name:mkeeac7154126727aaa3fed8ddd7c6410061a558 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:06:40.000586  185484 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/no-preload-998398/client.key ...
	I1013 22:06:40.000601  185484 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/no-preload-998398/client.key: {Name:mk8020e2d1d365cc6938cc134265e55a752ba5b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:06:40.000716  185484 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/no-preload-998398/apiserver.key.fe88bb21
	I1013 22:06:40.000737  185484 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/no-preload-998398/apiserver.crt.fe88bb21 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1013 22:06:40.862558  185484 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/no-preload-998398/apiserver.crt.fe88bb21 ...
	I1013 22:06:40.862585  185484 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/no-preload-998398/apiserver.crt.fe88bb21: {Name:mkd14ff7084c687b0894fed6c1b3fbde1f74b743 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:06:40.862766  185484 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/no-preload-998398/apiserver.key.fe88bb21 ...
	I1013 22:06:40.862782  185484 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/no-preload-998398/apiserver.key.fe88bb21: {Name:mk4bdeb8b712caf11512d0b8bccb7569786d821e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:06:40.862874  185484 certs.go:382] copying /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/no-preload-998398/apiserver.crt.fe88bb21 -> /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/no-preload-998398/apiserver.crt
	I1013 22:06:40.862964  185484 certs.go:386] copying /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/no-preload-998398/apiserver.key.fe88bb21 -> /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/no-preload-998398/apiserver.key
	I1013 22:06:40.863030  185484 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/no-preload-998398/proxy-client.key
	I1013 22:06:40.863052  185484 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/no-preload-998398/proxy-client.crt with IP's: []
	I1013 22:06:41.065939  185484 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/no-preload-998398/proxy-client.crt ...
	I1013 22:06:41.065974  185484 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/no-preload-998398/proxy-client.crt: {Name:mkcd829b0fe83fc581e6955c4c4ff1c754801bf2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:06:41.066156  185484 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/no-preload-998398/proxy-client.key ...
	I1013 22:06:41.066169  185484 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/no-preload-998398/proxy-client.key: {Name:mk6bc00de88e5c0cd7a912da42edab83af61ee15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:06:41.066360  185484 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/4299.pem (1338 bytes)
	W1013 22:06:41.066403  185484 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-2495/.minikube/certs/4299_empty.pem, impossibly tiny 0 bytes
	I1013 22:06:41.066417  185484 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca-key.pem (1679 bytes)
	I1013 22:06:41.066443  185484 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem (1082 bytes)
	I1013 22:06:41.066468  185484 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem (1123 bytes)
	I1013 22:06:41.066492  185484 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/key.pem (1675 bytes)
	I1013 22:06:41.066538  185484 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem (1708 bytes)
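The apiserver profile cert generated above is signed for the service VIP, loopback, and the node IP (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.76.2). A minimal crypto/x509 sketch that issues a certificate with the same IP SANs; it self-signs for brevity, whereas minikube signs these against its minikubeCA, and the common name here is illustrative:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// IP SANs taken from the apiserver cert generated in the log above.
	ips := []net.IP{
		net.ParseIP("10.96.0.1"),
		net.ParseIP("127.0.0.1"),
		net.ParseIP("10.0.0.1"),
		net.ParseIP("192.168.76.2"),
	}
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips,
	}
	// Self-signed: template doubles as parent. minikube would pass its CA cert and key here.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	out, err := os.Create("apiserver.crt")
	if err != nil {
		log.Fatal(err)
	}
	defer out.Close()
	if err := pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		log.Fatal(err)
	}
	log.Println("wrote apiserver.crt")
}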
	I1013 22:06:41.067102  185484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 22:06:41.086893  185484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1013 22:06:41.106422  185484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 22:06:41.126126  185484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1013 22:06:41.144517  185484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/no-preload-998398/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1013 22:06:41.162338  185484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/no-preload-998398/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1013 22:06:41.179642  185484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/no-preload-998398/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 22:06:41.199333  185484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/no-preload-998398/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1013 22:06:41.219143  185484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/certs/4299.pem --> /usr/share/ca-certificates/4299.pem (1338 bytes)
	I1013 22:06:41.241761  185484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem --> /usr/share/ca-certificates/42992.pem (1708 bytes)
	I1013 22:06:41.260608  185484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 22:06:41.278999  185484 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 22:06:41.292652  185484 ssh_runner.go:195] Run: openssl version
	I1013 22:06:41.301914  185484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 22:06:41.311436  185484 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:06:41.315315  185484 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 20:59 /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:06:41.315409  185484 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:06:41.359375  185484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1013 22:06:41.368364  185484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4299.pem && ln -fs /usr/share/ca-certificates/4299.pem /etc/ssl/certs/4299.pem"
	I1013 22:06:41.379436  185484 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4299.pem
	I1013 22:06:41.383746  185484 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 13 21:06 /usr/share/ca-certificates/4299.pem
	I1013 22:06:41.383901  185484 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4299.pem
	I1013 22:06:41.426472  185484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4299.pem /etc/ssl/certs/51391683.0"
	I1013 22:06:41.435891  185484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/42992.pem && ln -fs /usr/share/ca-certificates/42992.pem /etc/ssl/certs/42992.pem"
	I1013 22:06:41.447656  185484 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42992.pem
	I1013 22:06:41.453728  185484 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 13 21:06 /usr/share/ca-certificates/42992.pem
	I1013 22:06:41.453794  185484 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42992.pem
	I1013 22:06:41.504747  185484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/42992.pem /etc/ssl/certs/3ec20f2e.0"
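The openssl x509 -hash calls above compute each CA's subject hash so the PEM can be symlinked to the <hash>.0 name that OpenSSL-style trust stores resolve. A sketch of that hash-then-symlink step for minikubeCA.pem; it shells out to openssl exactly as the log does and needs root plus openssl on PATH:

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	// Same steps as the log: compute the subject hash of the CA PEM,
	// then symlink the hashed name to the installed certificate.
	pem := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// ln -fs equivalent: remove any stale link, then recreate it.
	_ = os.Remove(link)
	if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil {
		log.Fatal(err)
	}
	fmt.Println("linked", link, "->", "/etc/ssl/certs/minikubeCA.pem")
}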
	I1013 22:06:41.515434  185484 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 22:06:41.520216  185484 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1013 22:06:41.520269  185484 kubeadm.go:400] StartCluster: {Name:no-preload-998398 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-998398 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:06:41.520353  185484 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 22:06:41.520411  185484 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 22:06:41.553697  185484 cri.go:89] found id: ""
	I1013 22:06:41.553784  185484 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1013 22:06:41.563307  185484 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1013 22:06:41.571767  185484 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1013 22:06:41.571874  185484 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1013 22:06:41.580884  185484 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1013 22:06:41.580915  185484 kubeadm.go:157] found existing configuration files:
	
	I1013 22:06:41.580987  185484 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1013 22:06:41.589061  185484 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1013 22:06:41.589126  185484 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1013 22:06:41.597544  185484 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1013 22:06:41.605665  185484 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1013 22:06:41.605756  185484 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1013 22:06:41.613689  185484 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1013 22:06:41.622138  185484 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1013 22:06:41.622224  185484 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1013 22:06:41.630446  185484 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1013 22:06:41.638774  185484 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1013 22:06:41.638884  185484 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1013 22:06:41.647451  185484 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1013 22:06:41.690644  185484 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1013 22:06:41.690979  185484 kubeadm.go:318] [preflight] Running pre-flight checks
	I1013 22:06:41.718028  185484 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1013 22:06:41.718109  185484 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1013 22:06:41.718154  185484 kubeadm.go:318] OS: Linux
	I1013 22:06:41.718205  185484 kubeadm.go:318] CGROUPS_CPU: enabled
	I1013 22:06:41.718260  185484 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1013 22:06:41.718314  185484 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1013 22:06:41.718369  185484 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1013 22:06:41.718422  185484 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1013 22:06:41.718476  185484 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1013 22:06:41.718527  185484 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1013 22:06:41.718580  185484 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1013 22:06:41.718632  185484 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1013 22:06:41.788499  185484 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1013 22:06:41.788647  185484 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1013 22:06:41.788772  185484 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1013 22:06:41.803997  185484 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1013 22:06:41.807390  185484 out.go:252]   - Generating certificates and keys ...
	I1013 22:06:41.807504  185484 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1013 22:06:41.807584  185484 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1013 22:06:41.967700  185484 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1013 22:06:43.250803  185484 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1013 22:06:43.610801  185484 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1013 22:06:43.891744  185484 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1013 22:06:44.008886  185484 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1013 22:06:44.009039  185484 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-998398] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1013 22:06:44.976794  185484 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1013 22:06:44.977292  185484 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-998398] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1013 22:06:45.270130  185484 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1013 22:06:46.221956  185484 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1013 22:06:46.665637  185484 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1013 22:06:46.665975  185484 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1013 22:06:46.958886  185484 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1013 22:06:47.022526  185484 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1013 22:06:48.870921  185484 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1013 22:06:49.466853  185484 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1013 22:06:50.290469  185484 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1013 22:06:50.291203  185484 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1013 22:06:50.294027  185484 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	
	
	==> CRI-O <==
	Oct 13 22:06:40 old-k8s-version-061725 crio[650]: time="2025-10-13T22:06:40.865831102Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 13 22:06:40 old-k8s-version-061725 crio[650]: time="2025-10-13T22:06:40.876067252Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 13 22:06:40 old-k8s-version-061725 crio[650]: time="2025-10-13T22:06:40.87624694Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 13 22:06:40 old-k8s-version-061725 crio[650]: time="2025-10-13T22:06:40.876337464Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 13 22:06:40 old-k8s-version-061725 crio[650]: time="2025-10-13T22:06:40.892076689Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 13 22:06:40 old-k8s-version-061725 crio[650]: time="2025-10-13T22:06:40.89225801Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 13 22:06:40 old-k8s-version-061725 crio[650]: time="2025-10-13T22:06:40.892334225Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 13 22:06:40 old-k8s-version-061725 crio[650]: time="2025-10-13T22:06:40.904083347Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 13 22:06:40 old-k8s-version-061725 crio[650]: time="2025-10-13T22:06:40.904254887Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 13 22:06:40 old-k8s-version-061725 crio[650]: time="2025-10-13T22:06:40.904351927Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 13 22:06:40 old-k8s-version-061725 crio[650]: time="2025-10-13T22:06:40.90734267Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 13 22:06:40 old-k8s-version-061725 crio[650]: time="2025-10-13T22:06:40.90747614Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 13 22:06:44 old-k8s-version-061725 crio[650]: time="2025-10-13T22:06:44.573755012Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=d1e02636-95b7-407f-9024-cfc580d33ef5 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:06:44 old-k8s-version-061725 crio[650]: time="2025-10-13T22:06:44.574973533Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=32ffacee-d709-429c-b5f1-f480c55d2371 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:06:44 old-k8s-version-061725 crio[650]: time="2025-10-13T22:06:44.576738921Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-mxmft/dashboard-metrics-scraper" id=90d40e2b-2982-4199-81eb-4c69247d0c1d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:06:44 old-k8s-version-061725 crio[650]: time="2025-10-13T22:06:44.576955227Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:06:44 old-k8s-version-061725 crio[650]: time="2025-10-13T22:06:44.590454261Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:06:44 old-k8s-version-061725 crio[650]: time="2025-10-13T22:06:44.597710917Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:06:44 old-k8s-version-061725 crio[650]: time="2025-10-13T22:06:44.642253982Z" level=info msg="Created container 99688e937edfd2a11427cea137c4ab16f0c12b9ff59808610701730ba426a9b5: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-mxmft/dashboard-metrics-scraper" id=90d40e2b-2982-4199-81eb-4c69247d0c1d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:06:44 old-k8s-version-061725 crio[650]: time="2025-10-13T22:06:44.64453244Z" level=info msg="Starting container: 99688e937edfd2a11427cea137c4ab16f0c12b9ff59808610701730ba426a9b5" id=213243e0-cf3e-4a53-885b-2dfcb1b84e90 name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 22:06:44 old-k8s-version-061725 crio[650]: time="2025-10-13T22:06:44.64725325Z" level=info msg="Started container" PID=1692 containerID=99688e937edfd2a11427cea137c4ab16f0c12b9ff59808610701730ba426a9b5 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-mxmft/dashboard-metrics-scraper id=213243e0-cf3e-4a53-885b-2dfcb1b84e90 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b769fa712b70bcb0b5986a573ff0dda93e95ed8bd324858d35acc6dca52c32e1
	Oct 13 22:06:44 old-k8s-version-061725 conmon[1690]: conmon 99688e937edfd2a11427 <ninfo>: container 1692 exited with status 1
	Oct 13 22:06:44 old-k8s-version-061725 crio[650]: time="2025-10-13T22:06:44.948585129Z" level=info msg="Removing container: 6bee09c359a7fa70d52353a1fe3a59fa047bab38d7f5ed6eca9f8e28a9080b4f" id=c37b9402-334b-46ef-a170-d8f67c17f0ce name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 13 22:06:44 old-k8s-version-061725 crio[650]: time="2025-10-13T22:06:44.962994738Z" level=info msg="Error loading conmon cgroup of container 6bee09c359a7fa70d52353a1fe3a59fa047bab38d7f5ed6eca9f8e28a9080b4f: cgroup deleted" id=c37b9402-334b-46ef-a170-d8f67c17f0ce name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 13 22:06:44 old-k8s-version-061725 crio[650]: time="2025-10-13T22:06:44.980387574Z" level=info msg="Removed container 6bee09c359a7fa70d52353a1fe3a59fa047bab38d7f5ed6eca9f8e28a9080b4f: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-mxmft/dashboard-metrics-scraper" id=c37b9402-334b-46ef-a170-d8f67c17f0ce name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	99688e937edfd       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           10 seconds ago       Exited              dashboard-metrics-scraper   2                   b769fa712b70b       dashboard-metrics-scraper-5f989dc9cf-mxmft       kubernetes-dashboard
	f4214d686cc55       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           23 seconds ago       Running             storage-provisioner         2                   b6333766e944b       storage-provisioner                              kube-system
	bee272b4edb8b       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   35 seconds ago       Running             kubernetes-dashboard        0                   7e175e6395e27       kubernetes-dashboard-8694d4445c-6zgml            kubernetes-dashboard
	c92c35e7aab94       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           54 seconds ago       Running             busybox                     1                   f031ad9e0a8c0       busybox                                          default
	21645635ce14d       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           54 seconds ago       Exited              storage-provisioner         1                   b6333766e944b       storage-provisioner                              kube-system
	b5cfe60fee50a       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           55 seconds ago       Running             kube-proxy                  1                   2f775aadc28c9       kube-proxy-kglxn                                 kube-system
	cb2cbcad768d1       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           55 seconds ago       Running             kindnet-cni                 1                   c90b02a034846       kindnet-8j8n7                                    kube-system
	c648129a3253e       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           55 seconds ago       Running             coredns                     1                   8cebe87299770       coredns-5dd5756b68-6k2fk                         kube-system
	6eed8544403e9       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           About a minute ago   Running             etcd                        1                   8e948987a34df       etcd-old-k8s-version-061725                      kube-system
	b8caee63181a7       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           About a minute ago   Running             kube-scheduler              1                   ba305c44e4c43       kube-scheduler-old-k8s-version-061725            kube-system
	7b9a569532bd5       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           About a minute ago   Running             kube-apiserver              1                   719ecbfdc9972       kube-apiserver-old-k8s-version-061725            kube-system
	ea02ef13f9182       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           About a minute ago   Running             kube-controller-manager     1                   d618c304f120c       kube-controller-manager-old-k8s-version-061725   kube-system
	
	
	==> coredns [c648129a3253eaa1e9d7547c6256957ad5a93b39cb7716180e8208547ea6cdcc] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:46315 - 52769 "HINFO IN 1256378189370812.3910097882534820810. udp 54 false 512" NXDOMAIN qr,rd,ra 54 0.035256721s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-061725
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-061725
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6d66ff63385795e7745a92b3d96cb54f5b977801
	                    minikube.k8s.io/name=old-k8s-version-061725
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_13T22_04_51_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 Oct 2025 22:04:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-061725
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 Oct 2025 22:06:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 Oct 2025 22:06:30 +0000   Mon, 13 Oct 2025 22:04:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 Oct 2025 22:06:30 +0000   Mon, 13 Oct 2025 22:04:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 Oct 2025 22:06:30 +0000   Mon, 13 Oct 2025 22:04:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 13 Oct 2025 22:06:30 +0000   Mon, 13 Oct 2025 22:05:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-061725
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 b073986d8aa045508ab17637852ae6ea
	  System UUID:                a4ee82dc-aa4f-4d44-9281-73541a0cdcab
	  Boot ID:                    d306204e-e74c-4697-9887-c9f3c96cc083
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-5dd5756b68-6k2fk                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     113s
	  kube-system                 etcd-old-k8s-version-061725                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m5s
	  kube-system                 kindnet-8j8n7                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      113s
	  kube-system                 kube-apiserver-old-k8s-version-061725             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 kube-controller-manager-old-k8s-version-061725    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 kube-proxy-kglxn                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-scheduler-old-k8s-version-061725             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-mxmft        0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-6zgml             0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 111s                   kube-proxy       
	  Normal  Starting                 53s                    kube-proxy       
	  Normal  Starting                 2m13s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m13s (x8 over 2m13s)  kubelet          Node old-k8s-version-061725 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m13s (x8 over 2m13s)  kubelet          Node old-k8s-version-061725 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m13s (x8 over 2m13s)  kubelet          Node old-k8s-version-061725 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m6s                   kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    2m5s                   kubelet          Node old-k8s-version-061725 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m5s                   kubelet          Node old-k8s-version-061725 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  2m5s                   kubelet          Node old-k8s-version-061725 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           113s                   node-controller  Node old-k8s-version-061725 event: Registered Node old-k8s-version-061725 in Controller
	  Normal  NodeReady                98s                    kubelet          Node old-k8s-version-061725 status is now: NodeReady
	  Normal  Starting                 64s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  64s (x8 over 64s)      kubelet          Node old-k8s-version-061725 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    64s (x8 over 64s)      kubelet          Node old-k8s-version-061725 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     64s (x8 over 64s)      kubelet          Node old-k8s-version-061725 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           42s                    node-controller  Node old-k8s-version-061725 event: Registered Node old-k8s-version-061725 in Controller
	
	
	==> dmesg <==
	[Oct13 21:36] overlayfs: idmapped layers are currently not supported
	[ +36.803698] overlayfs: idmapped layers are currently not supported
	[Oct13 21:38] overlayfs: idmapped layers are currently not supported
	[Oct13 21:39] overlayfs: idmapped layers are currently not supported
	[Oct13 21:40] overlayfs: idmapped layers are currently not supported
	[Oct13 21:41] overlayfs: idmapped layers are currently not supported
	[Oct13 21:42] overlayfs: idmapped layers are currently not supported
	[  +7.684868] overlayfs: idmapped layers are currently not supported
	[Oct13 21:43] overlayfs: idmapped layers are currently not supported
	[ +17.500139] overlayfs: idmapped layers are currently not supported
	[Oct13 21:44] overlayfs: idmapped layers are currently not supported
	[ +25.978359] overlayfs: idmapped layers are currently not supported
	[Oct13 21:46] overlayfs: idmapped layers are currently not supported
	[Oct13 21:47] overlayfs: idmapped layers are currently not supported
	[Oct13 21:49] overlayfs: idmapped layers are currently not supported
	[Oct13 21:50] overlayfs: idmapped layers are currently not supported
	[Oct13 21:51] overlayfs: idmapped layers are currently not supported
	[Oct13 21:53] overlayfs: idmapped layers are currently not supported
	[Oct13 21:54] overlayfs: idmapped layers are currently not supported
	[Oct13 21:55] overlayfs: idmapped layers are currently not supported
	[Oct13 22:02] overlayfs: idmapped layers are currently not supported
	[Oct13 22:04] overlayfs: idmapped layers are currently not supported
	[ +37.438407] overlayfs: idmapped layers are currently not supported
	[Oct13 22:05] overlayfs: idmapped layers are currently not supported
	[Oct13 22:06] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [6eed8544403e99d9185f9c6d9e6d28a7fdd3896c087aeec1df5be870a03bbce0] <==
	{"level":"info","ts":"2025-10-13T22:05:52.28818Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-10-13T22:05:52.288282Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-13T22:05:52.288324Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-13T22:05:52.324834Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-13T22:05:52.324963Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-13T22:05:52.324998Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-13T22:05:52.32928Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-13T22:05:52.337666Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-13T22:05:52.337699Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-13T22:05:52.337855Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-13T22:05:52.337878Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-13T22:05:53.883817Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-13T22:05:53.883938Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-13T22:05:53.883987Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-10-13T22:05:53.884025Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-10-13T22:05:53.884059Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-13T22:05:53.884115Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-10-13T22:05:53.884148Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-13T22:05:53.892049Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-061725 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-13T22:05:53.89215Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-13T22:05:53.893236Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-13T22:05:53.892172Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-13T22:05:53.900845Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-10-13T22:05:53.915892Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-13T22:05:53.91599Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 22:06:55 up  1:49,  0 user,  load average: 5.12, 2.65, 2.10
	Linux old-k8s-version-061725 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [cb2cbcad768d11f9ca1e964c26de2e6f7d02f101ebb55b59faa4319adff9e6db] <==
	I1013 22:06:00.625809       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1013 22:06:00.628622       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1013 22:06:00.628768       1 main.go:148] setting mtu 1500 for CNI 
	I1013 22:06:00.628781       1 main.go:178] kindnetd IP family: "ipv4"
	I1013 22:06:00.628796       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-13T22:06:00Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1013 22:06:00.860262       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1013 22:06:00.860353       1 controller.go:381] "Waiting for informer caches to sync"
	I1013 22:06:00.860387       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1013 22:06:00.861112       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1013 22:06:30.861322       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1013 22:06:30.861434       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1013 22:06:30.861528       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1013 22:06:30.861623       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1013 22:06:32.262492       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1013 22:06:32.262582       1 metrics.go:72] Registering metrics
	I1013 22:06:32.262661       1 controller.go:711] "Syncing nftables rules"
	I1013 22:06:40.864817       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1013 22:06:40.865034       1 main.go:301] handling current node
	I1013 22:06:50.864731       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1013 22:06:50.864773       1 main.go:301] handling current node
	
	
	==> kube-apiserver [7b9a569532bd578665c2febc0e862c8b3dfa6aa451acd0888258fd6f6bd613b9] <==
	I1013 22:05:59.311972       1 cache.go:39] Caches are synced for autoregister controller
	I1013 22:05:59.313434       1 trace.go:236] Trace[1902423577]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:b0ae7f86-16b3-49c5-b902-4395a954ccba,client:192.168.85.2,protocol:HTTP/2.0,resource:nodes,scope:resource,url:/api/v1/nodes/old-k8s-version-061725,user-agent:kubelet/v1.28.0 (linux/arm64) kubernetes/855e7c4,verb:GET (13-Oct-2025 22:05:58.740) (total time: 572ms):
	Trace[1902423577]: ---"About to write a response" 571ms (22:05:59.312)
	Trace[1902423577]: [572.882896ms] [572.882896ms] END
	I1013 22:05:59.468688       1 trace.go:236] Trace[1540345707]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:d9b0f51f-7746-476e-ac3e-f467bf48b66d,client:192.168.85.2,protocol:HTTP/2.0,resource:nodes,scope:resource,url:/api/v1/nodes,user-agent:kubelet/v1.28.0 (linux/arm64) kubernetes/855e7c4,verb:POST (13-Oct-2025 22:05:58.776) (total time: 692ms):
	Trace[1540345707]: ---"limitedReadBody succeeded" len:4139 26ms (22:05:58.802)
	Trace[1540345707]: ---"Write to database call failed" len:4139,err:nodes "old-k8s-version-061725" already exists 181ms (22:05:59.465)
	Trace[1540345707]: [692.414873ms] [692.414873ms] END
	I1013 22:05:59.493612       1 trace.go:236] Trace[2076288680]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:9e115732-69d3-4410-884f-418fbea1955a,client:192.168.85.2,protocol:HTTP/2.0,resource:events,scope:resource,url:/api/v1/namespaces/default/events,user-agent:kubelet/v1.28.0 (linux/arm64) kubernetes/855e7c4,verb:POST (13-Oct-2025 22:05:58.715) (total time: 777ms):
	Trace[2076288680]: [777.672185ms] [777.672185ms] END
	I1013 22:05:59.567403       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1013 22:05:59.734887       1 trace.go:236] Trace[466923378]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:9a8e6494-efb9-4c64-aed2-1d7312304c7e,client:192.168.85.2,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.28.0 (linux/arm64) kubernetes/855e7c4,verb:POST (13-Oct-2025 22:05:59.201) (total time: 533ms):
	Trace[466923378]: ---"Write to database call failed" len:2175,err:pods "etcd-old-k8s-version-061725" already exists 113ms (22:05:59.734)
	Trace[466923378]: [533.490573ms] [533.490573ms] END
	E1013 22:05:59.766501       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1013 22:06:03.421164       1 controller.go:624] quota admission added evaluator for: namespaces
	I1013 22:06:03.467064       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1013 22:06:03.498670       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1013 22:06:03.516575       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1013 22:06:03.535594       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1013 22:06:03.622461       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.132.29"}
	I1013 22:06:03.695999       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.41.254"}
	I1013 22:06:13.304351       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1013 22:06:13.322744       1 controller.go:624] quota admission added evaluator for: endpoints
	I1013 22:06:13.442821       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [ea02ef13f9182b9021218457ea3dab09ac4d242a483e773331138defe8ef3896] <==
	I1013 22:06:13.417748       1 shared_informer.go:318] Caches are synced for resource quota
	I1013 22:06:13.453822       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8694d4445c to 1"
	I1013 22:06:13.453850       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-5f989dc9cf to 1"
	I1013 22:06:13.469979       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-mxmft"
	I1013 22:06:13.471321       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-6zgml"
	I1013 22:06:13.479930       1 shared_informer.go:318] Caches are synced for HPA
	I1013 22:06:13.493284       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="40.498008ms"
	I1013 22:06:13.504687       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="52.455946ms"
	I1013 22:06:13.525431       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="20.149823ms"
	I1013 22:06:13.525574       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="37.037µs"
	I1013 22:06:13.531201       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="44.503µs"
	I1013 22:06:13.540850       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="47.404043ms"
	I1013 22:06:13.540939       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="47.309µs"
	I1013 22:06:13.545055       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="69.143µs"
	I1013 22:06:13.858823       1 shared_informer.go:318] Caches are synced for garbage collector
	I1013 22:06:13.895882       1 shared_informer.go:318] Caches are synced for garbage collector
	I1013 22:06:13.895912       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1013 22:06:19.898974       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="25.143347ms"
	I1013 22:06:19.899978       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="63.194µs"
	I1013 22:06:26.912348       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="63.137µs"
	I1013 22:06:27.925411       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="61.168µs"
	I1013 22:06:28.920033       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="95.948µs"
	I1013 22:06:34.451771       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="13.985455ms"
	I1013 22:06:34.453460       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="61.66µs"
	I1013 22:06:44.967596       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="69.742µs"
	
	
	==> kube-proxy [b5cfe60fee50aca9adeb9a7210f96baa84cf5ff86310fb648ab48513f8990dd9] <==
	I1013 22:06:01.762315       1 server_others.go:69] "Using iptables proxy"
	I1013 22:06:01.880253       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1013 22:06:02.046556       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1013 22:06:02.070225       1 server_others.go:152] "Using iptables Proxier"
	I1013 22:06:02.070266       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1013 22:06:02.070274       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1013 22:06:02.070307       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1013 22:06:02.070525       1 server.go:846] "Version info" version="v1.28.0"
	I1013 22:06:02.070535       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 22:06:02.071471       1 config.go:188] "Starting service config controller"
	I1013 22:06:02.071504       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1013 22:06:02.071524       1 config.go:97] "Starting endpoint slice config controller"
	I1013 22:06:02.071527       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1013 22:06:02.072028       1 config.go:315] "Starting node config controller"
	I1013 22:06:02.072035       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1013 22:06:02.172087       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1013 22:06:02.172144       1 shared_informer.go:318] Caches are synced for service config
	I1013 22:06:02.176472       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [b8caee63181a735c804ef9eb3da1040c9ec20a7c106dec4be1a1e2979c1008be] <==
	I1013 22:05:55.030232       1 serving.go:348] Generated self-signed cert in-memory
	W1013 22:05:58.840195       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1013 22:05:58.840310       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1013 22:05:58.840343       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1013 22:05:58.840384       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1013 22:05:59.461817       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1013 22:05:59.461922       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 22:05:59.463591       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 22:05:59.463673       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1013 22:05:59.476455       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1013 22:05:59.476552       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1013 22:05:59.565538       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 13 22:06:13 old-k8s-version-061725 kubelet[774]: I1013 22:06:13.495311     774 topology_manager.go:215] "Topology Admit Handler" podUID="085d0596-5060-49cb-ada7-51da9c251ab8" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-mxmft"
	Oct 13 22:06:13 old-k8s-version-061725 kubelet[774]: I1013 22:06:13.585515     774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/b62057b0-535c-46d1-87a0-f7e573c4b455-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-6zgml\" (UID: \"b62057b0-535c-46d1-87a0-f7e573c4b455\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-6zgml"
	Oct 13 22:06:13 old-k8s-version-061725 kubelet[774]: I1013 22:06:13.585809     774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxh48\" (UniqueName: \"kubernetes.io/projected/b62057b0-535c-46d1-87a0-f7e573c4b455-kube-api-access-mxh48\") pod \"kubernetes-dashboard-8694d4445c-6zgml\" (UID: \"b62057b0-535c-46d1-87a0-f7e573c4b455\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-6zgml"
	Oct 13 22:06:13 old-k8s-version-061725 kubelet[774]: I1013 22:06:13.686044     774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkr45\" (UniqueName: \"kubernetes.io/projected/085d0596-5060-49cb-ada7-51da9c251ab8-kube-api-access-wkr45\") pod \"dashboard-metrics-scraper-5f989dc9cf-mxmft\" (UID: \"085d0596-5060-49cb-ada7-51da9c251ab8\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-mxmft"
	Oct 13 22:06:13 old-k8s-version-061725 kubelet[774]: I1013 22:06:13.686136     774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/085d0596-5060-49cb-ada7-51da9c251ab8-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-mxmft\" (UID: \"085d0596-5060-49cb-ada7-51da9c251ab8\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-mxmft"
	Oct 13 22:06:13 old-k8s-version-061725 kubelet[774]: W1013 22:06:13.817096     774 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/9b67329f891f8d4a601d1da44c24f1f70a816b6b9dd1271a7207bc7ac21cc041/crio-7e175e6395e27db44d204131ed09021ed6b4ceac32fc6aba9febf56309f9382c WatchSource:0}: Error finding container 7e175e6395e27db44d204131ed09021ed6b4ceac32fc6aba9febf56309f9382c: Status 404 returned error can't find the container with id 7e175e6395e27db44d204131ed09021ed6b4ceac32fc6aba9febf56309f9382c
	Oct 13 22:06:14 old-k8s-version-061725 kubelet[774]: W1013 22:06:14.113848     774 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/9b67329f891f8d4a601d1da44c24f1f70a816b6b9dd1271a7207bc7ac21cc041/crio-b769fa712b70bcb0b5986a573ff0dda93e95ed8bd324858d35acc6dca52c32e1 WatchSource:0}: Error finding container b769fa712b70bcb0b5986a573ff0dda93e95ed8bd324858d35acc6dca52c32e1: Status 404 returned error can't find the container with id b769fa712b70bcb0b5986a573ff0dda93e95ed8bd324858d35acc6dca52c32e1
	Oct 13 22:06:26 old-k8s-version-061725 kubelet[774]: I1013 22:06:26.883892     774 scope.go:117] "RemoveContainer" containerID="e86545af98ecb6b492019efe9708d42d08126c3d1e8495cc8b74fa443e3ccee3"
	Oct 13 22:06:26 old-k8s-version-061725 kubelet[774]: I1013 22:06:26.908821     774 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-6zgml" podStartSLOduration=8.02134172 podCreationTimestamp="2025-10-13 22:06:13 +0000 UTC" firstStartedPulling="2025-10-13 22:06:13.823335707 +0000 UTC m=+22.386609854" lastFinishedPulling="2025-10-13 22:06:19.710753 +0000 UTC m=+28.274027122" observedRunningTime="2025-10-13 22:06:19.8749169 +0000 UTC m=+28.438191021" watchObservedRunningTime="2025-10-13 22:06:26.908758988 +0000 UTC m=+35.472033118"
	Oct 13 22:06:27 old-k8s-version-061725 kubelet[774]: I1013 22:06:27.888261     774 scope.go:117] "RemoveContainer" containerID="e86545af98ecb6b492019efe9708d42d08126c3d1e8495cc8b74fa443e3ccee3"
	Oct 13 22:06:27 old-k8s-version-061725 kubelet[774]: I1013 22:06:27.888613     774 scope.go:117] "RemoveContainer" containerID="6bee09c359a7fa70d52353a1fe3a59fa047bab38d7f5ed6eca9f8e28a9080b4f"
	Oct 13 22:06:27 old-k8s-version-061725 kubelet[774]: E1013 22:06:27.889248     774 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-mxmft_kubernetes-dashboard(085d0596-5060-49cb-ada7-51da9c251ab8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-mxmft" podUID="085d0596-5060-49cb-ada7-51da9c251ab8"
	Oct 13 22:06:28 old-k8s-version-061725 kubelet[774]: I1013 22:06:28.892608     774 scope.go:117] "RemoveContainer" containerID="6bee09c359a7fa70d52353a1fe3a59fa047bab38d7f5ed6eca9f8e28a9080b4f"
	Oct 13 22:06:28 old-k8s-version-061725 kubelet[774]: E1013 22:06:28.893433     774 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-mxmft_kubernetes-dashboard(085d0596-5060-49cb-ada7-51da9c251ab8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-mxmft" podUID="085d0596-5060-49cb-ada7-51da9c251ab8"
	Oct 13 22:06:31 old-k8s-version-061725 kubelet[774]: I1013 22:06:31.902245     774 scope.go:117] "RemoveContainer" containerID="21645635ce14d06941230e4e6235b9280959f93614831d10abae4cb0b70f1236"
	Oct 13 22:06:34 old-k8s-version-061725 kubelet[774]: I1013 22:06:34.099077     774 scope.go:117] "RemoveContainer" containerID="6bee09c359a7fa70d52353a1fe3a59fa047bab38d7f5ed6eca9f8e28a9080b4f"
	Oct 13 22:06:34 old-k8s-version-061725 kubelet[774]: E1013 22:06:34.100020     774 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-mxmft_kubernetes-dashboard(085d0596-5060-49cb-ada7-51da9c251ab8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-mxmft" podUID="085d0596-5060-49cb-ada7-51da9c251ab8"
	Oct 13 22:06:44 old-k8s-version-061725 kubelet[774]: I1013 22:06:44.573092     774 scope.go:117] "RemoveContainer" containerID="6bee09c359a7fa70d52353a1fe3a59fa047bab38d7f5ed6eca9f8e28a9080b4f"
	Oct 13 22:06:44 old-k8s-version-061725 kubelet[774]: I1013 22:06:44.936447     774 scope.go:117] "RemoveContainer" containerID="6bee09c359a7fa70d52353a1fe3a59fa047bab38d7f5ed6eca9f8e28a9080b4f"
	Oct 13 22:06:44 old-k8s-version-061725 kubelet[774]: I1013 22:06:44.940603     774 scope.go:117] "RemoveContainer" containerID="99688e937edfd2a11427cea137c4ab16f0c12b9ff59808610701730ba426a9b5"
	Oct 13 22:06:44 old-k8s-version-061725 kubelet[774]: E1013 22:06:44.941106     774 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-mxmft_kubernetes-dashboard(085d0596-5060-49cb-ada7-51da9c251ab8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-mxmft" podUID="085d0596-5060-49cb-ada7-51da9c251ab8"
	Oct 13 22:06:49 old-k8s-version-061725 kubelet[774]: I1013 22:06:49.302824     774 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 13 22:06:49 old-k8s-version-061725 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 13 22:06:49 old-k8s-version-061725 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 13 22:06:49 old-k8s-version-061725 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [bee272b4edb8bfa59232efb28167b53a742045a704a78d5cb04dab0c16c607ad] <==
	2025/10/13 22:06:19 Using namespace: kubernetes-dashboard
	2025/10/13 22:06:19 Using in-cluster config to connect to apiserver
	2025/10/13 22:06:19 Using secret token for csrf signing
	2025/10/13 22:06:19 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/13 22:06:19 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/13 22:06:19 Successful initial request to the apiserver, version: v1.28.0
	2025/10/13 22:06:19 Generating JWE encryption key
	2025/10/13 22:06:19 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/13 22:06:19 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/13 22:06:23 Initializing JWE encryption key from synchronized object
	2025/10/13 22:06:23 Creating in-cluster Sidecar client
	2025/10/13 22:06:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/13 22:06:23 Serving insecurely on HTTP port: 9090
	2025/10/13 22:06:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/13 22:06:19 Starting overwatch
	
	
	==> storage-provisioner [21645635ce14d06941230e4e6235b9280959f93614831d10abae4cb0b70f1236] <==
	I1013 22:06:00.969101       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1013 22:06:30.970777       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [f4214d686cc551d918ce7ab3ebc086aed6b9ef041c9d4f95ae3e52094f9f8fe4] <==
	I1013 22:06:32.044032       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1013 22:06:32.068933       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1013 22:06:32.069057       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1013 22:06:49.568204       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1013 22:06:49.568432       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-061725_db14fc1b-5dcb-46f3-a6b6-40d90b3aa07d!
	I1013 22:06:49.578367       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a7559b29-90e7-44b0-9ce8-e3c256861aa5", APIVersion:"v1", ResourceVersion:"658", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-061725_db14fc1b-5dcb-46f3-a6b6-40d90b3aa07d became leader
	I1013 22:06:49.669531       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-061725_db14fc1b-5dcb-46f3-a6b6-40d90b3aa07d!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-061725 -n old-k8s-version-061725
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-061725 -n old-k8s-version-061725: exit status 2 (488.698666ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-061725 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (8.29s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.74s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-998398 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-998398 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (277.053831ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:07:34Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p no-preload-998398 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
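The MK_ADDON_ENABLE_PAUSED error above comes from minikube's paused check, which shells into the node and runs `sudo runc list -f json`; with the crio runtime that check can fail simply because runc's default state directory is missing on the node. A minimal sketch for reproducing the check by hand, assuming shell access to this profile's node (the crio drop-in path mirrors the one queried elsewhere in this report, and the /run/runc location is taken from the error text above, not from documentation):

	# Re-run the exact check reported in the stderr block above
	out/minikube-linux-arm64 ssh -p no-preload-998398 -- sudo runc list -f json

	# Look at how CRI-O is configured on the node and whether the default runc root exists
	out/minikube-linux-arm64 ssh -p no-preload-998398 -- sudo cat /etc/crio/crio.conf.d/02-crio.conf
	out/minikube-linux-arm64 ssh -p no-preload-998398 -- sudo ls /run/runc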
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-998398 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-998398 describe deploy/metrics-server -n kube-system: exit status 1 (103.352973ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-998398 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
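The assertion at line 219 expects the metrics-server deployment's image to contain "fake.domain/registry.k8s.io/echoserver:1.4", but since the enable command itself exited with status 11 the deployment was evidently never created and the describe output above is empty. A hedged sketch of how that image value could be checked manually if the deployment did exist, in the same kubectl style used by the other probes in this report (the jsonpath expression is illustrative):

	kubectl --context no-preload-998398 -n kube-system get deploy metrics-server \
	  -o=jsonpath='{.spec.template.spec.containers[*].image}'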
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-998398
helpers_test.go:243: (dbg) docker inspect no-preload-998398:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6fb16f37ec05da7de816bc3e9c2e323940cdc45e442a73f88f3547ae26c3416c",
	        "Created": "2025-10-13T22:06:10.888076989Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 185792,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-13T22:06:10.965217562Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/6fb16f37ec05da7de816bc3e9c2e323940cdc45e442a73f88f3547ae26c3416c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6fb16f37ec05da7de816bc3e9c2e323940cdc45e442a73f88f3547ae26c3416c/hostname",
	        "HostsPath": "/var/lib/docker/containers/6fb16f37ec05da7de816bc3e9c2e323940cdc45e442a73f88f3547ae26c3416c/hosts",
	        "LogPath": "/var/lib/docker/containers/6fb16f37ec05da7de816bc3e9c2e323940cdc45e442a73f88f3547ae26c3416c/6fb16f37ec05da7de816bc3e9c2e323940cdc45e442a73f88f3547ae26c3416c-json.log",
	        "Name": "/no-preload-998398",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-998398:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-998398",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6fb16f37ec05da7de816bc3e9c2e323940cdc45e442a73f88f3547ae26c3416c",
	                "LowerDir": "/var/lib/docker/overlay2/499694e6085395b70735b3d3547db65ef3a8c5e98935f88339db5f4531738658-init/diff:/var/lib/docker/overlay2/160e637be34680c755ffc6109c686a7fe5c4dfcba06c9274b3806724dc064518/diff",
	                "MergedDir": "/var/lib/docker/overlay2/499694e6085395b70735b3d3547db65ef3a8c5e98935f88339db5f4531738658/merged",
	                "UpperDir": "/var/lib/docker/overlay2/499694e6085395b70735b3d3547db65ef3a8c5e98935f88339db5f4531738658/diff",
	                "WorkDir": "/var/lib/docker/overlay2/499694e6085395b70735b3d3547db65ef3a8c5e98935f88339db5f4531738658/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-998398",
	                "Source": "/var/lib/docker/volumes/no-preload-998398/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-998398",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-998398",
	                "name.minikube.sigs.k8s.io": "no-preload-998398",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "caa09af413805bcac0da025970e9fd377772f738969e59c049e878c72f360296",
	            "SandboxKey": "/var/run/docker/netns/caa09af41380",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33061"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33062"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33065"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33063"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33064"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-998398": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e6:95:31:1f:5a:2a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "833f6629e3a8d48e88017e58115925d444d24da96413e70671b51381906ca938",
	                    "EndpointID": "1a53394c47af03207a384423684a2288410fae3b964b5d73133cdfec0a295201",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-998398",
	                        "6fb16f37ec05"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
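For a quicker read than the full dump above, `docker inspect` also accepts a Go-template `--format`/`-f` flag (the same template style the status probes in this report use), so individual fields can be pulled directly. A small sketch, with field paths taken from the JSON above:

	docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' no-preload-998398
	docker inspect -f '{{json .NetworkSettings.Ports}}' no-preload-998398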
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-998398 -n no-preload-998398
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-998398 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-998398 logs -n 25: (1.324024407s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-122822 sudo crio config                                                                                                                                                                                                             │ cilium-122822             │ jenkins │ v1.37.0 │ 13 Oct 25 21:55 UTC │                     │
	│ delete  │ -p cilium-122822                                                                                                                                                                                                                              │ cilium-122822             │ jenkins │ v1.37.0 │ 13 Oct 25 21:55 UTC │ 13 Oct 25 21:55 UTC │
	│ start   │ -p force-systemd-env-312094 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-312094  │ jenkins │ v1.37.0 │ 13 Oct 25 21:55 UTC │                     │
	│ ssh     │ force-systemd-flag-257205 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-257205 │ jenkins │ v1.37.0 │ 13 Oct 25 22:02 UTC │ 13 Oct 25 22:02 UTC │
	│ delete  │ -p force-systemd-flag-257205                                                                                                                                                                                                                  │ force-systemd-flag-257205 │ jenkins │ v1.37.0 │ 13 Oct 25 22:02 UTC │ 13 Oct 25 22:02 UTC │
	│ start   │ -p cert-expiration-546667 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-546667    │ jenkins │ v1.37.0 │ 13 Oct 25 22:02 UTC │ 13 Oct 25 22:02 UTC │
	│ delete  │ -p force-systemd-env-312094                                                                                                                                                                                                                   │ force-systemd-env-312094  │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │ 13 Oct 25 22:03 UTC │
	│ start   │ -p cert-options-194931 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-194931       │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │ 13 Oct 25 22:04 UTC │
	│ ssh     │ cert-options-194931 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-194931       │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │ 13 Oct 25 22:04 UTC │
	│ ssh     │ -p cert-options-194931 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-194931       │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │ 13 Oct 25 22:04 UTC │
	│ delete  │ -p cert-options-194931                                                                                                                                                                                                                        │ cert-options-194931       │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │ 13 Oct 25 22:04 UTC │
	│ start   │ -p old-k8s-version-061725 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-061725    │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │ 13 Oct 25 22:05 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-061725 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-061725    │ jenkins │ v1.37.0 │ 13 Oct 25 22:05 UTC │                     │
	│ stop    │ -p old-k8s-version-061725 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-061725    │ jenkins │ v1.37.0 │ 13 Oct 25 22:05 UTC │ 13 Oct 25 22:05 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-061725 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-061725    │ jenkins │ v1.37.0 │ 13 Oct 25 22:05 UTC │ 13 Oct 25 22:05 UTC │
	│ start   │ -p old-k8s-version-061725 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-061725    │ jenkins │ v1.37.0 │ 13 Oct 25 22:05 UTC │ 13 Oct 25 22:06 UTC │
	│ start   │ -p cert-expiration-546667 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-546667    │ jenkins │ v1.37.0 │ 13 Oct 25 22:05 UTC │ 13 Oct 25 22:06 UTC │
	│ delete  │ -p cert-expiration-546667                                                                                                                                                                                                                     │ cert-expiration-546667    │ jenkins │ v1.37.0 │ 13 Oct 25 22:06 UTC │ 13 Oct 25 22:06 UTC │
	│ start   │ -p no-preload-998398 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-998398         │ jenkins │ v1.37.0 │ 13 Oct 25 22:06 UTC │ 13 Oct 25 22:07 UTC │
	│ image   │ old-k8s-version-061725 image list --format=json                                                                                                                                                                                               │ old-k8s-version-061725    │ jenkins │ v1.37.0 │ 13 Oct 25 22:06 UTC │ 13 Oct 25 22:06 UTC │
	│ pause   │ -p old-k8s-version-061725 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-061725    │ jenkins │ v1.37.0 │ 13 Oct 25 22:06 UTC │                     │
	│ delete  │ -p old-k8s-version-061725                                                                                                                                                                                                                     │ old-k8s-version-061725    │ jenkins │ v1.37.0 │ 13 Oct 25 22:06 UTC │ 13 Oct 25 22:07 UTC │
	│ delete  │ -p old-k8s-version-061725                                                                                                                                                                                                                     │ old-k8s-version-061725    │ jenkins │ v1.37.0 │ 13 Oct 25 22:07 UTC │ 13 Oct 25 22:07 UTC │
	│ start   │ -p embed-certs-251758 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-251758        │ jenkins │ v1.37.0 │ 13 Oct 25 22:07 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-998398 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-998398         │ jenkins │ v1.37.0 │ 13 Oct 25 22:07 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 22:07:00
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 22:07:00.801802  189874 out.go:360] Setting OutFile to fd 1 ...
	I1013 22:07:00.801986  189874 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:07:00.802013  189874 out.go:374] Setting ErrFile to fd 2...
	I1013 22:07:00.802032  189874 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:07:00.802336  189874 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-2495/.minikube/bin
	I1013 22:07:00.802792  189874 out.go:368] Setting JSON to false
	I1013 22:07:00.805016  189874 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":6555,"bootTime":1760386666,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1013 22:07:00.805111  189874 start.go:141] virtualization:  
	I1013 22:07:00.810902  189874 out.go:179] * [embed-certs-251758] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1013 22:07:00.813966  189874 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 22:07:00.814165  189874 notify.go:220] Checking for updates...
	I1013 22:07:00.820237  189874 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 22:07:00.823086  189874 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-2495/kubeconfig
	I1013 22:07:00.826008  189874 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-2495/.minikube
	I1013 22:07:00.828903  189874 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1013 22:07:00.831815  189874 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 22:07:00.835198  189874 config.go:182] Loaded profile config "no-preload-998398": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:07:00.835292  189874 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 22:07:00.876650  189874 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1013 22:07:00.876804  189874 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 22:07:00.978968  189874 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-13 22:07:00.967390376 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 22:07:00.979083  189874 docker.go:318] overlay module found
	I1013 22:07:00.982279  189874 out.go:179] * Using the docker driver based on user configuration
	I1013 22:07:00.985097  189874 start.go:305] selected driver: docker
	I1013 22:07:00.985122  189874 start.go:925] validating driver "docker" against <nil>
	I1013 22:07:00.985143  189874 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 22:07:00.985881  189874 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 22:07:01.100581  189874 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-13 22:07:01.091422312 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 22:07:01.100746  189874 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1013 22:07:01.100963  189874 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 22:07:01.104258  189874 out.go:179] * Using Docker driver with root privileges
	I1013 22:07:01.107054  189874 cni.go:84] Creating CNI manager for ""
	I1013 22:07:01.107119  189874 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 22:07:01.107127  189874 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1013 22:07:01.107195  189874 start.go:349] cluster config:
	{Name:embed-certs-251758 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-251758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPI
D:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:07:01.110269  189874 out.go:179] * Starting "embed-certs-251758" primary control-plane node in "embed-certs-251758" cluster
	I1013 22:07:01.113090  189874 cache.go:123] Beginning downloading kic base image for docker with crio
	I1013 22:07:01.116057  189874 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1013 22:07:01.118950  189874 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:07:01.119005  189874 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-2495/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1013 22:07:01.119014  189874 cache.go:58] Caching tarball of preloaded images
	I1013 22:07:01.119095  189874 preload.go:233] Found /home/jenkins/minikube-integration/21724-2495/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1013 22:07:01.119105  189874 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1013 22:07:01.119219  189874 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/embed-certs-251758/config.json ...
	I1013 22:07:01.119236  189874 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/embed-certs-251758/config.json: {Name:mk9af2a56cac2b5904ed81bd5afffc31491f01d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:07:01.119390  189874 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1013 22:07:01.139806  189874 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1013 22:07:01.139833  189874 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1013 22:07:01.139854  189874 cache.go:232] Successfully downloaded all kic artifacts
	I1013 22:07:01.139878  189874 start.go:360] acquireMachinesLock for embed-certs-251758: {Name:mk516ca80db4149cf875ca7692ac1e5faffe2cbf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 22:07:01.139982  189874 start.go:364] duration metric: took 85.561µs to acquireMachinesLock for "embed-certs-251758"
	I1013 22:07:01.140014  189874 start.go:93] Provisioning new machine with config: &{Name:embed-certs-251758 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-251758 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 22:07:01.140087  189874 start.go:125] createHost starting for "" (driver="docker")
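(Reader's note: the start log above checks whether the pinned kicbase image already exists in the local Docker daemon before deciding whether to pull it. A minimal shell sketch of that check follows; the image reference is taken from the log, while the pull fallback is illustrative rather than a literal reproduction of what minikube runs.)

  IMG='gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92'
  # inspect succeeds only if the image is already in the local daemon; otherwise fall back to a pull
  docker image inspect "$IMG" >/dev/null 2>&1 || docker pull "$IMG"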
	I1013 22:07:00.217283  185484 out.go:252]   - Configuring RBAC rules ...
	I1013 22:07:00.217448  185484 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1013 22:07:00.262530  185484 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1013 22:07:00.310954  185484 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1013 22:07:00.320132  185484 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1013 22:07:00.330558  185484 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1013 22:07:00.345947  185484 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1013 22:07:00.403464  185484 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1013 22:07:00.871551  185484 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1013 22:07:01.395212  185484 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1013 22:07:01.396403  185484 kubeadm.go:318] 
	I1013 22:07:01.396559  185484 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1013 22:07:01.396577  185484 kubeadm.go:318] 
	I1013 22:07:01.397185  185484 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1013 22:07:01.397595  185484 kubeadm.go:318] 
	I1013 22:07:01.398248  185484 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1013 22:07:01.398632  185484 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1013 22:07:01.398869  185484 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1013 22:07:01.398883  185484 kubeadm.go:318] 
	I1013 22:07:01.399394  185484 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1013 22:07:01.399427  185484 kubeadm.go:318] 
	I1013 22:07:01.399484  185484 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1013 22:07:01.399493  185484 kubeadm.go:318] 
	I1013 22:07:01.399552  185484 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1013 22:07:01.400062  185484 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1013 22:07:01.400144  185484 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1013 22:07:01.400161  185484 kubeadm.go:318] 
	I1013 22:07:01.400283  185484 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1013 22:07:01.401204  185484 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1013 22:07:01.401216  185484 kubeadm.go:318] 
	I1013 22:07:01.401317  185484 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 11f6d5.410hrw908t37tslp \
	I1013 22:07:01.401432  185484 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:8246abc30e5ad4f73e0c5665e9f98b5f472397f3707b313971a0405cce9e4e60 \
	I1013 22:07:01.401465  185484 kubeadm.go:318] 	--control-plane 
	I1013 22:07:01.401478  185484 kubeadm.go:318] 
	I1013 22:07:01.401574  185484 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1013 22:07:01.401582  185484 kubeadm.go:318] 
	I1013 22:07:01.401688  185484 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 11f6d5.410hrw908t37tslp \
	I1013 22:07:01.401816  185484 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:8246abc30e5ad4f73e0c5665e9f98b5f472397f3707b313971a0405cce9e4e60 
	I1013 22:07:01.405907  185484 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1013 22:07:01.406151  185484 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1013 22:07:01.406264  185484 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1013 22:07:01.406281  185484 cni.go:84] Creating CNI manager for ""
	I1013 22:07:01.406289  185484 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 22:07:01.412376  185484 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1013 22:07:01.415752  185484 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1013 22:07:01.421129  185484 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1013 22:07:01.421151  185484 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1013 22:07:01.437231  185484 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1013 22:07:01.849355  185484 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1013 22:07:01.849517  185484 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:07:01.849635  185484 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-998398 minikube.k8s.io/updated_at=2025_10_13T22_07_01_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6d66ff63385795e7745a92b3d96cb54f5b977801 minikube.k8s.io/name=no-preload-998398 minikube.k8s.io/primary=true
	I1013 22:07:02.217012  185484 ops.go:34] apiserver oom_adj: -16
	I1013 22:07:02.254132  185484 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:07:02.754244  185484 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:07:03.255024  185484 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:07:03.754907  185484 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:07:04.254238  185484 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:07:04.754693  185484 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
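(Reader's note: the repeated `kubectl get sa default` runs above are a poll for the cluster's default ServiceAccount to appear after kubeadm init. Roughly the same wait expressed as a shell loop; the sleep interval is illustrative, not taken from the log.)

  KUBECTL=/var/lib/minikube/binaries/v1.34.1/kubectl
  KUBECONFIG_PATH=/var/lib/minikube/kubeconfig
  # retry until the "default" ServiceAccount exists in the new cluster
  until sudo "$KUBECTL" get sa default --kubeconfig="$KUBECONFIG_PATH" >/dev/null 2>&1; do
      sleep 0.5
  done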
	I1013 22:07:01.143494  189874 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1013 22:07:01.143745  189874 start.go:159] libmachine.API.Create for "embed-certs-251758" (driver="docker")
	I1013 22:07:01.143845  189874 client.go:168] LocalClient.Create starting
	I1013 22:07:01.143940  189874 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem
	I1013 22:07:01.143981  189874 main.go:141] libmachine: Decoding PEM data...
	I1013 22:07:01.143998  189874 main.go:141] libmachine: Parsing certificate...
	I1013 22:07:01.144059  189874 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem
	I1013 22:07:01.144082  189874 main.go:141] libmachine: Decoding PEM data...
	I1013 22:07:01.144096  189874 main.go:141] libmachine: Parsing certificate...
	I1013 22:07:01.144475  189874 cli_runner.go:164] Run: docker network inspect embed-certs-251758 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1013 22:07:01.163195  189874 cli_runner.go:211] docker network inspect embed-certs-251758 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1013 22:07:01.163272  189874 network_create.go:284] running [docker network inspect embed-certs-251758] to gather additional debugging logs...
	I1013 22:07:01.163298  189874 cli_runner.go:164] Run: docker network inspect embed-certs-251758
	W1013 22:07:01.183309  189874 cli_runner.go:211] docker network inspect embed-certs-251758 returned with exit code 1
	I1013 22:07:01.183338  189874 network_create.go:287] error running [docker network inspect embed-certs-251758]: docker network inspect embed-certs-251758: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-251758 not found
	I1013 22:07:01.183378  189874 network_create.go:289] output of [docker network inspect embed-certs-251758]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-251758 not found
	
	** /stderr **
	I1013 22:07:01.183493  189874 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 22:07:01.204317  189874 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-95647f6063f5 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:2e:3d:b3:ce:26:60} reservation:<nil>}
	I1013 22:07:01.204643  189874 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-524c3512c6b6 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:06:88:a1:02:e0:8e} reservation:<nil>}
	I1013 22:07:01.204954  189874 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-2d17b8b5c002 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:aa:ca:29:7e:1f:a0} reservation:<nil>}
	I1013 22:07:01.205205  189874 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-833f6629e3a8 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:d6:5d:aa:fd:88:ca} reservation:<nil>}
	I1013 22:07:01.205600  189874 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019f0fc0}
	I1013 22:07:01.205623  189874 network_create.go:124] attempt to create docker network embed-certs-251758 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1013 22:07:01.205680  189874 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-251758 embed-certs-251758
	I1013 22:07:01.287219  189874 network_create.go:108] docker network embed-certs-251758 192.168.85.0/24 created
	I1013 22:07:01.287265  189874 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-251758" container
	I1013 22:07:01.287350  189874 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1013 22:07:01.305917  189874 cli_runner.go:164] Run: docker volume create embed-certs-251758 --label name.minikube.sigs.k8s.io=embed-certs-251758 --label created_by.minikube.sigs.k8s.io=true
	I1013 22:07:01.326023  189874 oci.go:103] Successfully created a docker volume embed-certs-251758
	I1013 22:07:01.326120  189874 cli_runner.go:164] Run: docker run --rm --name embed-certs-251758-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-251758 --entrypoint /usr/bin/test -v embed-certs-251758:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1013 22:07:01.963067  189874 oci.go:107] Successfully prepared a docker volume embed-certs-251758
	I1013 22:07:01.963109  189874 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:07:01.963128  189874 kic.go:194] Starting extracting preloaded images to volume ...
	I1013 22:07:01.963193  189874 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-2495/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-251758:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
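(Reader's note: just before the preload extraction above, the log shows minikube scanning for a free private /24 and creating a dedicated bridge network for the cluster. A quick manual check of the result, using the same Go-template fields the log itself queries, would look like this; the exact output wording is an assumption.)

  # print the subnet and gateway assigned to the cluster network
  docker network inspect embed-certs-251758 \
    --format 'subnet: {{range .IPAM.Config}}{{.Subnet}}{{end}}  gateway: {{range .IPAM.Config}}{{.Gateway}}{{end}}'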
	I1013 22:07:05.254428  185484 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:07:05.754163  185484 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:07:06.254230  185484 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:07:06.392972  185484 kubeadm.go:1113] duration metric: took 4.543516857s to wait for elevateKubeSystemPrivileges
	I1013 22:07:06.393002  185484 kubeadm.go:402] duration metric: took 24.872736674s to StartCluster
	I1013 22:07:06.393019  185484 settings.go:142] acquiring lock: {Name:mk4a4b065845724eb9b4bb1832a39a02e57dd066 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:07:06.393079  185484 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-2495/kubeconfig
	I1013 22:07:06.393760  185484 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/kubeconfig: {Name:mke211bdcf461494c5205a242d94f4d44bbce10e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:07:06.401903  185484 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 22:07:06.402016  185484 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1013 22:07:06.402309  185484 config.go:182] Loaded profile config "no-preload-998398": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:07:06.402347  185484 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1013 22:07:06.402476  185484 addons.go:69] Setting default-storageclass=true in profile "no-preload-998398"
	I1013 22:07:06.402491  185484 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-998398"
	I1013 22:07:06.402812  185484 cli_runner.go:164] Run: docker container inspect no-preload-998398 --format={{.State.Status}}
	I1013 22:07:06.408781  185484 addons.go:69] Setting storage-provisioner=true in profile "no-preload-998398"
	I1013 22:07:06.408823  185484 addons.go:238] Setting addon storage-provisioner=true in "no-preload-998398"
	I1013 22:07:06.408855  185484 host.go:66] Checking if "no-preload-998398" exists ...
	I1013 22:07:06.409325  185484 cli_runner.go:164] Run: docker container inspect no-preload-998398 --format={{.State.Status}}
	I1013 22:07:06.420567  185484 out.go:179] * Verifying Kubernetes components...
	I1013 22:07:06.451886  185484 addons.go:238] Setting addon default-storageclass=true in "no-preload-998398"
	I1013 22:07:06.451924  185484 host.go:66] Checking if "no-preload-998398" exists ...
	I1013 22:07:06.452378  185484 cli_runner.go:164] Run: docker container inspect no-preload-998398 --format={{.State.Status}}
	I1013 22:07:06.474521  185484 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1013 22:07:06.474542  185484 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1013 22:07:06.474606  185484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-998398
	I1013 22:07:06.491980  185484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33061 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/no-preload-998398/id_rsa Username:docker}
	I1013 22:07:06.492863  185484 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1013 22:07:06.492953  185484 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:07:06.558266  185484 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 22:07:06.558293  185484 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1013 22:07:06.558376  185484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-998398
	I1013 22:07:06.584731  185484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33061 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/no-preload-998398/id_rsa Username:docker}
	I1013 22:07:06.600321  185484 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1013 22:07:06.769623  185484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1013 22:07:06.818147  185484 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 22:07:06.821777  185484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 22:07:07.398880  185484 node_ready.go:35] waiting up to 6m0s for node "no-preload-998398" to be "Ready" ...
	I1013 22:07:07.400274  185484 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1013 22:07:07.921578  185484 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-998398" context rescaled to 1 replicas
	I1013 22:07:08.063502  185484 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.241647226s)
	I1013 22:07:08.066684  185484 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1013 22:07:08.070374  185484 addons.go:514] duration metric: took 1.668005956s for enable addons: enabled=[default-storageclass storage-provisioner]
	W1013 22:07:09.402320  185484 node_ready.go:57] node "no-preload-998398" has "Ready":"False" status (will retry)
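(Reader's note: the node_ready retries above simply watch the node's Ready condition until it flips to True. An equivalent manual check against the same kubeconfig is sketched below; minikube itself does this through the Kubernetes client API rather than kubectl.)

  # prints "True" once the node has become Ready
  sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
    get node no-preload-998398 \
    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'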
	I1013 22:07:07.133423  189874 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-2495/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-251758:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (5.170183261s)
	I1013 22:07:07.133465  189874 kic.go:203] duration metric: took 5.170334076s to extract preloaded images to volume ...
	W1013 22:07:07.133591  189874 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1013 22:07:07.133710  189874 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1013 22:07:07.256404  189874 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-251758 --name embed-certs-251758 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-251758 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-251758 --network embed-certs-251758 --ip 192.168.85.2 --volume embed-certs-251758:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1013 22:07:07.674052  189874 cli_runner.go:164] Run: docker container inspect embed-certs-251758 --format={{.State.Running}}
	I1013 22:07:07.713096  189874 cli_runner.go:164] Run: docker container inspect embed-certs-251758 --format={{.State.Status}}
	I1013 22:07:07.746941  189874 cli_runner.go:164] Run: docker exec embed-certs-251758 stat /var/lib/dpkg/alternatives/iptables
	I1013 22:07:07.809260  189874 oci.go:144] the created container "embed-certs-251758" has a running status.
	I1013 22:07:07.809303  189874 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21724-2495/.minikube/machines/embed-certs-251758/id_rsa...
	I1013 22:07:08.454064  189874 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21724-2495/.minikube/machines/embed-certs-251758/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1013 22:07:08.481614  189874 cli_runner.go:164] Run: docker container inspect embed-certs-251758 --format={{.State.Status}}
	I1013 22:07:08.515360  189874 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1013 22:07:08.515379  189874 kic_runner.go:114] Args: [docker exec --privileged embed-certs-251758 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1013 22:07:08.627620  189874 cli_runner.go:164] Run: docker container inspect embed-certs-251758 --format={{.State.Status}}
	I1013 22:07:08.653633  189874 machine.go:93] provisionDockerMachine start ...
	I1013 22:07:08.653713  189874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-251758
	I1013 22:07:08.676954  189874 main.go:141] libmachine: Using SSH client type: native
	I1013 22:07:08.677388  189874 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33066 <nil> <nil>}
	I1013 22:07:08.677402  189874 main.go:141] libmachine: About to run SSH command:
	hostname
	I1013 22:07:08.678115  189874 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1013 22:07:11.823177  189874 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-251758
	
	I1013 22:07:11.823199  189874 ubuntu.go:182] provisioning hostname "embed-certs-251758"
	I1013 22:07:11.823265  189874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-251758
	I1013 22:07:11.839925  189874 main.go:141] libmachine: Using SSH client type: native
	I1013 22:07:11.840240  189874 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33066 <nil> <nil>}
	I1013 22:07:11.840256  189874 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-251758 && echo "embed-certs-251758" | sudo tee /etc/hostname
	I1013 22:07:12.001499  189874 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-251758
	
	I1013 22:07:12.001585  189874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-251758
	I1013 22:07:12.022807  189874 main.go:141] libmachine: Using SSH client type: native
	I1013 22:07:12.023113  189874 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33066 <nil> <nil>}
	I1013 22:07:12.023133  189874 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-251758' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-251758/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-251758' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 22:07:12.172287  189874 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1013 22:07:12.172311  189874 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-2495/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-2495/.minikube}
	I1013 22:07:12.172340  189874 ubuntu.go:190] setting up certificates
	I1013 22:07:12.172351  189874 provision.go:84] configureAuth start
	I1013 22:07:12.172408  189874 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-251758
	I1013 22:07:12.191935  189874 provision.go:143] copyHostCerts
	I1013 22:07:12.192002  189874 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-2495/.minikube/ca.pem, removing ...
	I1013 22:07:12.192017  189874 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-2495/.minikube/ca.pem
	I1013 22:07:12.192096  189874 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-2495/.minikube/ca.pem (1082 bytes)
	I1013 22:07:12.192189  189874 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-2495/.minikube/cert.pem, removing ...
	I1013 22:07:12.192200  189874 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-2495/.minikube/cert.pem
	I1013 22:07:12.192233  189874 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-2495/.minikube/cert.pem (1123 bytes)
	I1013 22:07:12.192318  189874 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-2495/.minikube/key.pem, removing ...
	I1013 22:07:12.192327  189874 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-2495/.minikube/key.pem
	I1013 22:07:12.192353  189874 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-2495/.minikube/key.pem (1675 bytes)
	I1013 22:07:12.192405  189874 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-2495/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca-key.pem org=jenkins.embed-certs-251758 san=[127.0.0.1 192.168.85.2 embed-certs-251758 localhost minikube]
	I1013 22:07:12.446665  189874 provision.go:177] copyRemoteCerts
	I1013 22:07:12.446746  189874 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 22:07:12.446792  189874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-251758
	I1013 22:07:12.463458  189874 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33066 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/embed-certs-251758/id_rsa Username:docker}
	I1013 22:07:12.567955  189874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1013 22:07:12.586545  189874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1013 22:07:12.604982  189874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1013 22:07:12.622230  189874 provision.go:87] duration metric: took 449.857434ms to configureAuth
	I1013 22:07:12.622299  189874 ubuntu.go:206] setting minikube options for container-runtime
	I1013 22:07:12.622495  189874 config.go:182] Loaded profile config "embed-certs-251758": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:07:12.622600  189874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-251758
	I1013 22:07:12.639116  189874 main.go:141] libmachine: Using SSH client type: native
	I1013 22:07:12.639423  189874 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33066 <nil> <nil>}
	I1013 22:07:12.639438  189874 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1013 22:07:12.976597  189874 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1013 22:07:12.976622  189874 machine.go:96] duration metric: took 4.322973581s to provisionDockerMachine
	I1013 22:07:12.976632  189874 client.go:171] duration metric: took 11.832773453s to LocalClient.Create
	I1013 22:07:12.976661  189874 start.go:167] duration metric: took 11.832907612s to libmachine.API.Create "embed-certs-251758"
	I1013 22:07:12.976672  189874 start.go:293] postStartSetup for "embed-certs-251758" (driver="docker")
	I1013 22:07:12.976682  189874 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 22:07:12.976753  189874 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 22:07:12.976810  189874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-251758
	I1013 22:07:12.995675  189874 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33066 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/embed-certs-251758/id_rsa Username:docker}
	I1013 22:07:13.099528  189874 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 22:07:13.102528  189874 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1013 22:07:13.102558  189874 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1013 22:07:13.102569  189874 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-2495/.minikube/addons for local assets ...
	I1013 22:07:13.102622  189874 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-2495/.minikube/files for local assets ...
	I1013 22:07:13.102711  189874 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem -> 42992.pem in /etc/ssl/certs
	I1013 22:07:13.102856  189874 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1013 22:07:13.109922  189874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem --> /etc/ssl/certs/42992.pem (1708 bytes)
	I1013 22:07:13.137500  189874 start.go:296] duration metric: took 160.8135ms for postStartSetup
	I1013 22:07:13.137861  189874 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-251758
	I1013 22:07:13.153861  189874 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/embed-certs-251758/config.json ...
	I1013 22:07:13.154136  189874 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 22:07:13.154185  189874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-251758
	I1013 22:07:13.171153  189874 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33066 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/embed-certs-251758/id_rsa Username:docker}
	I1013 22:07:13.272541  189874 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1013 22:07:13.277142  189874 start.go:128] duration metric: took 12.137041102s to createHost
	I1013 22:07:13.277165  189874 start.go:83] releasing machines lock for "embed-certs-251758", held for 12.137169058s
	I1013 22:07:13.277244  189874 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-251758
	I1013 22:07:13.293261  189874 ssh_runner.go:195] Run: cat /version.json
	I1013 22:07:13.293320  189874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-251758
	I1013 22:07:13.294298  189874 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 22:07:13.294367  189874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-251758
	I1013 22:07:13.316959  189874 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33066 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/embed-certs-251758/id_rsa Username:docker}
	I1013 22:07:13.316959  189874 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33066 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/embed-certs-251758/id_rsa Username:docker}
	I1013 22:07:13.520093  189874 ssh_runner.go:195] Run: systemctl --version
	I1013 22:07:13.526237  189874 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1013 22:07:13.561268  189874 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 22:07:13.565394  189874 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 22:07:13.565504  189874 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 22:07:13.595694  189874 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1013 22:07:13.595727  189874 start.go:495] detecting cgroup driver to use...
	I1013 22:07:13.595759  189874 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1013 22:07:13.595836  189874 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1013 22:07:13.614949  189874 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1013 22:07:13.628341  189874 docker.go:218] disabling cri-docker service (if available) ...
	I1013 22:07:13.628401  189874 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 22:07:13.645390  189874 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 22:07:13.663484  189874 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 22:07:13.779874  189874 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 22:07:13.901984  189874 docker.go:234] disabling docker service ...
	I1013 22:07:13.902092  189874 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 22:07:13.925098  189874 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 22:07:13.938101  189874 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 22:07:14.071002  189874 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 22:07:14.205303  189874 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1013 22:07:14.217986  189874 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 22:07:14.234192  189874 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1013 22:07:14.234253  189874 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:07:14.243046  189874 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1013 22:07:14.243154  189874 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:07:14.252549  189874 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:07:14.261370  189874 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:07:14.269807  189874 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 22:07:14.278109  189874 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:07:14.286771  189874 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:07:14.299879  189874 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:07:14.308452  189874 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 22:07:14.317073  189874 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1013 22:07:14.324439  189874 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:07:14.436159  189874 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1013 22:07:14.592598  189874 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1013 22:07:14.592732  189874 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1013 22:07:14.596867  189874 start.go:563] Will wait 60s for crictl version
	I1013 22:07:14.596973  189874 ssh_runner.go:195] Run: which crictl
	I1013 22:07:14.600456  189874 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1013 22:07:14.628240  189874 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1013 22:07:14.628396  189874 ssh_runner.go:195] Run: crio --version
	I1013 22:07:14.659113  189874 ssh_runner.go:195] Run: crio --version
	I1013 22:07:14.691009  189874 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
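(Reader's note: the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf using the cgroupfs cgroup manager and the registry.k8s.io/pause:3.10.1 pause image, after which CRI-O is restarted. A sketch of how one could confirm the runtime picked the settings up, assuming the crictl binary at /usr/local/bin/crictl as shown in the log:)

  # confirm the restarted runtime answers on its socket and report its version
  sudo /usr/local/bin/crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
  # show the values minikube wrote into the drop-in config
  sudo grep -E 'cgroup_manager|pause_image' /etc/crio/crio.conf.d/02-crio.conf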
	W1013 22:07:11.402596  185484 node_ready.go:57] node "no-preload-998398" has "Ready":"False" status (will retry)
	W1013 22:07:13.903309  185484 node_ready.go:57] node "no-preload-998398" has "Ready":"False" status (will retry)
	I1013 22:07:14.693952  189874 cli_runner.go:164] Run: docker network inspect embed-certs-251758 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 22:07:14.709759  189874 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1013 22:07:14.713615  189874 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 22:07:14.722761  189874 kubeadm.go:883] updating cluster {Name:embed-certs-251758 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-251758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath
: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 22:07:14.722871  189874 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:07:14.722938  189874 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 22:07:14.754725  189874 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 22:07:14.754750  189874 crio.go:433] Images already preloaded, skipping extraction
	I1013 22:07:14.754803  189874 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 22:07:14.780530  189874 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 22:07:14.780552  189874 cache_images.go:85] Images are preloaded, skipping loading
	I1013 22:07:14.780560  189874 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1013 22:07:14.780658  189874 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-251758 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-251758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1013 22:07:14.780744  189874 ssh_runner.go:195] Run: crio config
	I1013 22:07:14.843998  189874 cni.go:84] Creating CNI manager for ""
	I1013 22:07:14.844023  189874 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 22:07:14.844043  189874 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1013 22:07:14.844066  189874 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-251758 NodeName:embed-certs-251758 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 22:07:14.844195  189874 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-251758"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1013 22:07:14.844299  189874 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1013 22:07:14.851678  189874 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 22:07:14.851763  189874 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 22:07:14.859192  189874 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1013 22:07:14.871583  189874 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 22:07:14.885342  189874 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1013 22:07:14.898658  189874 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1013 22:07:14.905241  189874 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
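Both cluster aliases (host.minikube.internal above and control-plane.minikube.internal here) are written with the same idempotent pattern: filter out any existing line for the name, then append a fresh mapping. A generic sketch of that pattern (the helper function is illustrative, not minikube code):

	set_hosts_entry() {
	  local ip="$1" name="$2"
	  # drop any previous mapping for this name, then append the desired one
	  { grep -v $'\t'"${name}"'$' /etc/hosts; printf '%s\t%s\n' "${ip}" "${name}"; } > "/tmp/hosts.$$"
	  sudo cp "/tmp/hosts.$$" /etc/hosts
	}
	set_hosts_entry 192.168.85.1 host.minikube.internal            # gateway alias for the host
	set_hosts_entry 192.168.85.2 control-plane.minikube.internal   # API server alias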
	I1013 22:07:14.915501  189874 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:07:15.047887  189874 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 22:07:15.068097  189874 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/embed-certs-251758 for IP: 192.168.85.2
	I1013 22:07:15.068169  189874 certs.go:195] generating shared ca certs ...
	I1013 22:07:15.068200  189874 certs.go:227] acquiring lock for ca certs: {Name:mk2386d3847709c1fe7ff4ab092e2e3fd8551167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:07:15.068395  189874 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-2495/.minikube/ca.key
	I1013 22:07:15.068485  189874 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.key
	I1013 22:07:15.068517  189874 certs.go:257] generating profile certs ...
	I1013 22:07:15.068602  189874 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/embed-certs-251758/client.key
	I1013 22:07:15.068663  189874 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/embed-certs-251758/client.crt with IP's: []
	W1013 22:07:16.402556  185484 node_ready.go:57] node "no-preload-998398" has "Ready":"False" status (will retry)
	W1013 22:07:18.902859  185484 node_ready.go:57] node "no-preload-998398" has "Ready":"False" status (will retry)
	I1013 22:07:15.848739  189874 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/embed-certs-251758/client.crt ...
	I1013 22:07:15.848771  189874 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/embed-certs-251758/client.crt: {Name:mk830add153ffb6fcf9cb2efb6ce1d86e82ccd09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:07:15.848975  189874 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/embed-certs-251758/client.key ...
	I1013 22:07:15.848992  189874 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/embed-certs-251758/client.key: {Name:mk145797e5947b3ee45d26827d49160ece0dd901 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:07:15.849124  189874 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/embed-certs-251758/apiserver.key.3c24f2a0
	I1013 22:07:15.849140  189874 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/embed-certs-251758/apiserver.crt.3c24f2a0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1013 22:07:16.262243  189874 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/embed-certs-251758/apiserver.crt.3c24f2a0 ...
	I1013 22:07:16.262275  189874 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/embed-certs-251758/apiserver.crt.3c24f2a0: {Name:mkee356612f8d08b88b785411f8402347b9b6a20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:07:16.262466  189874 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/embed-certs-251758/apiserver.key.3c24f2a0 ...
	I1013 22:07:16.262480  189874 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/embed-certs-251758/apiserver.key.3c24f2a0: {Name:mk2b14d81bf687305eb8b02f3a37fe724fb4e079 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:07:16.262564  189874 certs.go:382] copying /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/embed-certs-251758/apiserver.crt.3c24f2a0 -> /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/embed-certs-251758/apiserver.crt
	I1013 22:07:16.262645  189874 certs.go:386] copying /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/embed-certs-251758/apiserver.key.3c24f2a0 -> /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/embed-certs-251758/apiserver.key
	I1013 22:07:16.262707  189874 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/embed-certs-251758/proxy-client.key
	I1013 22:07:16.262726  189874 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/embed-certs-251758/proxy-client.crt with IP's: []
	I1013 22:07:16.379306  189874 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/embed-certs-251758/proxy-client.crt ...
	I1013 22:07:16.379334  189874 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/embed-certs-251758/proxy-client.crt: {Name:mkd16f8f00019b7ded311e30f30939f2d5f230d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:07:16.379495  189874 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/embed-certs-251758/proxy-client.key ...
	I1013 22:07:16.379509  189874 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/embed-certs-251758/proxy-client.key: {Name:mk6e5ee1d581b4e33927aece6081465df47c94a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:07:16.379687  189874 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/4299.pem (1338 bytes)
	W1013 22:07:16.379729  189874 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-2495/.minikube/certs/4299_empty.pem, impossibly tiny 0 bytes
	I1013 22:07:16.379743  189874 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca-key.pem (1679 bytes)
	I1013 22:07:16.379768  189874 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem (1082 bytes)
	I1013 22:07:16.379813  189874 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem (1123 bytes)
	I1013 22:07:16.379839  189874 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/key.pem (1675 bytes)
	I1013 22:07:16.379885  189874 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem (1708 bytes)
	I1013 22:07:16.380456  189874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 22:07:16.400190  189874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1013 22:07:16.418177  189874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 22:07:16.435939  189874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1013 22:07:16.452971  189874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/embed-certs-251758/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1013 22:07:16.471524  189874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/embed-certs-251758/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1013 22:07:16.489611  189874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/embed-certs-251758/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 22:07:16.508088  189874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/embed-certs-251758/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1013 22:07:16.525104  189874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 22:07:16.542623  189874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/certs/4299.pem --> /usr/share/ca-certificates/4299.pem (1338 bytes)
	I1013 22:07:16.559495  189874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem --> /usr/share/ca-certificates/42992.pem (1708 bytes)
	I1013 22:07:16.577808  189874 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 22:07:16.591169  189874 ssh_runner.go:195] Run: openssl version
	I1013 22:07:16.597851  189874 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 22:07:16.605842  189874 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:07:16.609450  189874 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 20:59 /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:07:16.609552  189874 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:07:16.650711  189874 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1013 22:07:16.660095  189874 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4299.pem && ln -fs /usr/share/ca-certificates/4299.pem /etc/ssl/certs/4299.pem"
	I1013 22:07:16.668995  189874 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4299.pem
	I1013 22:07:16.673023  189874 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 13 21:06 /usr/share/ca-certificates/4299.pem
	I1013 22:07:16.673123  189874 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4299.pem
	I1013 22:07:16.720884  189874 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4299.pem /etc/ssl/certs/51391683.0"
	I1013 22:07:16.729511  189874 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/42992.pem && ln -fs /usr/share/ca-certificates/42992.pem /etc/ssl/certs/42992.pem"
	I1013 22:07:16.741615  189874 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42992.pem
	I1013 22:07:16.745567  189874 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 13 21:06 /usr/share/ca-certificates/42992.pem
	I1013 22:07:16.745636  189874 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42992.pem
	I1013 22:07:16.787771  189874 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/42992.pem /etc/ssl/certs/3ec20f2e.0"
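Each extra CA goes through the same install pattern shown above: link the PEM under /etc/ssl/certs, compute its OpenSSL subject hash, and create the <hash>.0 symlink that OpenSSL lookups resolve (the hashes b5213941, 51391683 and 3ec20f2e in the log are simply what openssl printed for these files). A condensed sketch:

	for pem in minikubeCA.pem 4299.pem 42992.pem; do
	  src="/usr/share/ca-certificates/${pem}"
	  sudo ln -fs "${src}" "/etc/ssl/certs/${pem}"                     # expose the cert under /etc/ssl/certs
	  hash="$(openssl x509 -hash -noout -in "${src}")"                 # subject hash, e.g. b5213941
	  sudo ln -fs "/etc/ssl/certs/${pem}" "/etc/ssl/certs/${hash}.0"   # hash link used during verification
	done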
	I1013 22:07:16.795769  189874 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 22:07:16.799102  189874 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1013 22:07:16.799185  189874 kubeadm.go:400] StartCluster: {Name:embed-certs-251758 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-251758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: S
ocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:07:16.799290  189874 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 22:07:16.799352  189874 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 22:07:16.826689  189874 cri.go:89] found id: ""
	I1013 22:07:16.826833  189874 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1013 22:07:16.834839  189874 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
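With the rendered config now at /var/tmp/minikube/kubeadm.yaml, it can be sanity-checked offline before init runs; a sketch, assuming the "config validate" subcommand shipped with recent kubeadm releases:

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml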
	I1013 22:07:16.842195  189874 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1013 22:07:16.842287  189874 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1013 22:07:16.850266  189874 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1013 22:07:16.850285  189874 kubeadm.go:157] found existing configuration files:
	
	I1013 22:07:16.850335  189874 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1013 22:07:16.857707  189874 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1013 22:07:16.857773  189874 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1013 22:07:16.867882  189874 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1013 22:07:16.876686  189874 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1013 22:07:16.876798  189874 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1013 22:07:16.884388  189874 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1013 22:07:16.892449  189874 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1013 22:07:16.892518  189874 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1013 22:07:16.901191  189874 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1013 22:07:16.909936  189874 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1013 22:07:16.909995  189874 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1013 22:07:16.917376  189874 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1013 22:07:16.988249  189874 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1013 22:07:16.988563  189874 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1013 22:07:17.068517  189874 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
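The SystemVerification warning above concerns the host cgroup hierarchy; a quick way to check which one the node is on (GNU coreutils stat assumed):

	stat -fc %T /sys/fs/cgroup   # "cgroup2fs" means cgroups v2, "tmpfs" means cgroups v1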
	I1013 22:07:21.410529  185484 node_ready.go:49] node "no-preload-998398" is "Ready"
	I1013 22:07:21.410563  185484 node_ready.go:38] duration metric: took 14.01165627s for node "no-preload-998398" to be "Ready" ...
	I1013 22:07:21.410576  185484 api_server.go:52] waiting for apiserver process to appear ...
	I1013 22:07:21.410634  185484 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 22:07:21.434202  185484 api_server.go:72] duration metric: took 15.032246221s to wait for apiserver process to appear ...
	I1013 22:07:21.434228  185484 api_server.go:88] waiting for apiserver healthz status ...
	I1013 22:07:21.434248  185484 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1013 22:07:21.454704  185484 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1013 22:07:21.455794  185484 api_server.go:141] control plane version: v1.34.1
	I1013 22:07:21.455814  185484 api_server.go:131] duration metric: took 21.578527ms to wait for apiserver health ...
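The healthz wait is a plain HTTPS GET against the apiserver; done by hand it looks roughly like the sketch below (certificate verification is skipped here for brevity, and /healthz is readable without credentials on a default install):

	curl -sk https://192.168.76.2:8443/healthz
	# expected once the control plane is up:
	# ok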
	I1013 22:07:21.455824  185484 system_pods.go:43] waiting for kube-system pods to appear ...
	I1013 22:07:21.473746  185484 system_pods.go:59] 8 kube-system pods found
	I1013 22:07:21.473780  185484 system_pods.go:61] "coredns-66bc5c9577-7vlmn" [edd4eb6c-ff17-43de-a57d-d119a7cad435] Pending
	I1013 22:07:21.473786  185484 system_pods.go:61] "etcd-no-preload-998398" [b8d4d15c-e804-4230-92e8-8f587ee86dbe] Running
	I1013 22:07:21.473791  185484 system_pods.go:61] "kindnet-6nvxb" [5e372ae0-66e2-4ba1-a61a-de71523b139d] Running
	I1013 22:07:21.473796  185484 system_pods.go:61] "kube-apiserver-no-preload-998398" [a614fcdc-6540-451c-93c3-b9ecb1b4e09f] Running
	I1013 22:07:21.473800  185484 system_pods.go:61] "kube-controller-manager-no-preload-998398" [08d7347e-65c5-4912-982b-1f47cecac69f] Running
	I1013 22:07:21.473805  185484 system_pods.go:61] "kube-proxy-7zmxr" [e943a88c-1969-4fb7-bbe9-03f3a93e5d6d] Running
	I1013 22:07:21.473809  185484 system_pods.go:61] "kube-scheduler-no-preload-998398" [06161d4d-6e4b-4998-ab23-07b72d2c2d2f] Running
	I1013 22:07:21.473814  185484 system_pods.go:61] "storage-provisioner" [c073142f-fc41-4606-802d-105fcab5d408] Pending
	I1013 22:07:21.473819  185484 system_pods.go:74] duration metric: took 17.989243ms to wait for pod list to return data ...
	I1013 22:07:21.473826  185484 default_sa.go:34] waiting for default service account to be created ...
	I1013 22:07:21.479260  185484 default_sa.go:45] found service account: "default"
	I1013 22:07:21.479348  185484 default_sa.go:55] duration metric: took 5.514446ms for default service account to be created ...
	I1013 22:07:21.479372  185484 system_pods.go:116] waiting for k8s-apps to be running ...
	I1013 22:07:21.506658  185484 system_pods.go:86] 8 kube-system pods found
	I1013 22:07:21.506695  185484 system_pods.go:89] "coredns-66bc5c9577-7vlmn" [edd4eb6c-ff17-43de-a57d-d119a7cad435] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 22:07:21.506701  185484 system_pods.go:89] "etcd-no-preload-998398" [b8d4d15c-e804-4230-92e8-8f587ee86dbe] Running
	I1013 22:07:21.506708  185484 system_pods.go:89] "kindnet-6nvxb" [5e372ae0-66e2-4ba1-a61a-de71523b139d] Running
	I1013 22:07:21.506713  185484 system_pods.go:89] "kube-apiserver-no-preload-998398" [a614fcdc-6540-451c-93c3-b9ecb1b4e09f] Running
	I1013 22:07:21.506717  185484 system_pods.go:89] "kube-controller-manager-no-preload-998398" [08d7347e-65c5-4912-982b-1f47cecac69f] Running
	I1013 22:07:21.506721  185484 system_pods.go:89] "kube-proxy-7zmxr" [e943a88c-1969-4fb7-bbe9-03f3a93e5d6d] Running
	I1013 22:07:21.506725  185484 system_pods.go:89] "kube-scheduler-no-preload-998398" [06161d4d-6e4b-4998-ab23-07b72d2c2d2f] Running
	I1013 22:07:21.506730  185484 system_pods.go:89] "storage-provisioner" [c073142f-fc41-4606-802d-105fcab5d408] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 22:07:21.506751  185484 retry.go:31] will retry after 210.849129ms: missing components: kube-dns
	I1013 22:07:21.722592  185484 system_pods.go:86] 8 kube-system pods found
	I1013 22:07:21.722623  185484 system_pods.go:89] "coredns-66bc5c9577-7vlmn" [edd4eb6c-ff17-43de-a57d-d119a7cad435] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 22:07:21.722630  185484 system_pods.go:89] "etcd-no-preload-998398" [b8d4d15c-e804-4230-92e8-8f587ee86dbe] Running
	I1013 22:07:21.722636  185484 system_pods.go:89] "kindnet-6nvxb" [5e372ae0-66e2-4ba1-a61a-de71523b139d] Running
	I1013 22:07:21.722640  185484 system_pods.go:89] "kube-apiserver-no-preload-998398" [a614fcdc-6540-451c-93c3-b9ecb1b4e09f] Running
	I1013 22:07:21.722644  185484 system_pods.go:89] "kube-controller-manager-no-preload-998398" [08d7347e-65c5-4912-982b-1f47cecac69f] Running
	I1013 22:07:21.722648  185484 system_pods.go:89] "kube-proxy-7zmxr" [e943a88c-1969-4fb7-bbe9-03f3a93e5d6d] Running
	I1013 22:07:21.722652  185484 system_pods.go:89] "kube-scheduler-no-preload-998398" [06161d4d-6e4b-4998-ab23-07b72d2c2d2f] Running
	I1013 22:07:21.722657  185484 system_pods.go:89] "storage-provisioner" [c073142f-fc41-4606-802d-105fcab5d408] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 22:07:21.722671  185484 retry.go:31] will retry after 247.277947ms: missing components: kube-dns
	I1013 22:07:22.005067  185484 system_pods.go:86] 8 kube-system pods found
	I1013 22:07:22.005108  185484 system_pods.go:89] "coredns-66bc5c9577-7vlmn" [edd4eb6c-ff17-43de-a57d-d119a7cad435] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 22:07:22.005115  185484 system_pods.go:89] "etcd-no-preload-998398" [b8d4d15c-e804-4230-92e8-8f587ee86dbe] Running
	I1013 22:07:22.005121  185484 system_pods.go:89] "kindnet-6nvxb" [5e372ae0-66e2-4ba1-a61a-de71523b139d] Running
	I1013 22:07:22.005126  185484 system_pods.go:89] "kube-apiserver-no-preload-998398" [a614fcdc-6540-451c-93c3-b9ecb1b4e09f] Running
	I1013 22:07:22.005131  185484 system_pods.go:89] "kube-controller-manager-no-preload-998398" [08d7347e-65c5-4912-982b-1f47cecac69f] Running
	I1013 22:07:22.005135  185484 system_pods.go:89] "kube-proxy-7zmxr" [e943a88c-1969-4fb7-bbe9-03f3a93e5d6d] Running
	I1013 22:07:22.005139  185484 system_pods.go:89] "kube-scheduler-no-preload-998398" [06161d4d-6e4b-4998-ab23-07b72d2c2d2f] Running
	I1013 22:07:22.005146  185484 system_pods.go:89] "storage-provisioner" [c073142f-fc41-4606-802d-105fcab5d408] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 22:07:22.005165  185484 retry.go:31] will retry after 380.752365ms: missing components: kube-dns
	I1013 22:07:22.397192  185484 system_pods.go:86] 8 kube-system pods found
	I1013 22:07:22.397224  185484 system_pods.go:89] "coredns-66bc5c9577-7vlmn" [edd4eb6c-ff17-43de-a57d-d119a7cad435] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 22:07:22.397232  185484 system_pods.go:89] "etcd-no-preload-998398" [b8d4d15c-e804-4230-92e8-8f587ee86dbe] Running
	I1013 22:07:22.397239  185484 system_pods.go:89] "kindnet-6nvxb" [5e372ae0-66e2-4ba1-a61a-de71523b139d] Running
	I1013 22:07:22.397243  185484 system_pods.go:89] "kube-apiserver-no-preload-998398" [a614fcdc-6540-451c-93c3-b9ecb1b4e09f] Running
	I1013 22:07:22.397249  185484 system_pods.go:89] "kube-controller-manager-no-preload-998398" [08d7347e-65c5-4912-982b-1f47cecac69f] Running
	I1013 22:07:22.397253  185484 system_pods.go:89] "kube-proxy-7zmxr" [e943a88c-1969-4fb7-bbe9-03f3a93e5d6d] Running
	I1013 22:07:22.397257  185484 system_pods.go:89] "kube-scheduler-no-preload-998398" [06161d4d-6e4b-4998-ab23-07b72d2c2d2f] Running
	I1013 22:07:22.397261  185484 system_pods.go:89] "storage-provisioner" [c073142f-fc41-4606-802d-105fcab5d408] Running
	I1013 22:07:22.397268  185484 system_pods.go:126] duration metric: took 917.879156ms to wait for k8s-apps to be running ...
	I1013 22:07:22.397276  185484 system_svc.go:44] waiting for kubelet service to be running ....
	I1013 22:07:22.397329  185484 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 22:07:22.420776  185484 system_svc.go:56] duration metric: took 23.453606ms WaitForService to wait for kubelet
	I1013 22:07:22.420862  185484 kubeadm.go:586] duration metric: took 16.018910022s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 22:07:22.420898  185484 node_conditions.go:102] verifying NodePressure condition ...
	I1013 22:07:22.425027  185484 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1013 22:07:22.425105  185484 node_conditions.go:123] node cpu capacity is 2
	I1013 22:07:22.425132  185484 node_conditions.go:105] duration metric: took 4.216649ms to run NodePressure ...
	I1013 22:07:22.425157  185484 start.go:241] waiting for startup goroutines ...
	I1013 22:07:22.425189  185484 start.go:246] waiting for cluster config update ...
	I1013 22:07:22.425216  185484 start.go:255] writing updated cluster config ...
	I1013 22:07:22.425578  185484 ssh_runner.go:195] Run: rm -f paused
	I1013 22:07:22.431123  185484 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 22:07:22.435422  185484 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-7vlmn" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:07:23.442762  185484 pod_ready.go:94] pod "coredns-66bc5c9577-7vlmn" is "Ready"
	I1013 22:07:23.442838  185484 pod_ready.go:86] duration metric: took 1.007339374s for pod "coredns-66bc5c9577-7vlmn" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:07:23.446492  185484 pod_ready.go:83] waiting for pod "etcd-no-preload-998398" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:07:23.455299  185484 pod_ready.go:94] pod "etcd-no-preload-998398" is "Ready"
	I1013 22:07:23.455371  185484 pod_ready.go:86] duration metric: took 8.808164ms for pod "etcd-no-preload-998398" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:07:23.459119  185484 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-998398" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:07:23.468213  185484 pod_ready.go:94] pod "kube-apiserver-no-preload-998398" is "Ready"
	I1013 22:07:23.468296  185484 pod_ready.go:86] duration metric: took 9.103853ms for pod "kube-apiserver-no-preload-998398" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:07:23.474292  185484 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-998398" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:07:23.640214  185484 pod_ready.go:94] pod "kube-controller-manager-no-preload-998398" is "Ready"
	I1013 22:07:23.640305  185484 pod_ready.go:86] duration metric: took 165.932754ms for pod "kube-controller-manager-no-preload-998398" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:07:23.840222  185484 pod_ready.go:83] waiting for pod "kube-proxy-7zmxr" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:07:24.240332  185484 pod_ready.go:94] pod "kube-proxy-7zmxr" is "Ready"
	I1013 22:07:24.240409  185484 pod_ready.go:86] duration metric: took 400.106599ms for pod "kube-proxy-7zmxr" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:07:24.440514  185484 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-998398" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:07:24.841421  185484 pod_ready.go:94] pod "kube-scheduler-no-preload-998398" is "Ready"
	I1013 22:07:24.841503  185484 pod_ready.go:86] duration metric: took 400.914349ms for pod "kube-scheduler-no-preload-998398" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:07:24.841537  185484 pod_ready.go:40] duration metric: took 2.410318413s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 22:07:24.936808  185484 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1013 22:07:24.939987  185484 out.go:179] * Done! kubectl is now configured to use "no-preload-998398" cluster and "default" namespace by default
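The per-pod readiness loop above has a rough kubectl equivalent once the kubeconfig is in place (context name and label selectors taken from the log; an illustrative sketch, not what minikube itself runs):

	kubectl --context no-preload-998398 -n kube-system wait pod \
	  -l k8s-app=kube-dns --for=condition=Ready --timeout=4m0s
	kubectl --context no-preload-998398 -n kube-system wait pod \
	  -l component=kube-apiserver --for=condition=Ready --timeout=4m0s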
	I1013 22:07:34.483602  189874 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1013 22:07:34.483657  189874 kubeadm.go:318] [preflight] Running pre-flight checks
	I1013 22:07:34.483745  189874 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1013 22:07:34.483834  189874 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1013 22:07:34.483870  189874 kubeadm.go:318] OS: Linux
	I1013 22:07:34.483916  189874 kubeadm.go:318] CGROUPS_CPU: enabled
	I1013 22:07:34.483964  189874 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1013 22:07:34.484011  189874 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1013 22:07:34.484059  189874 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1013 22:07:34.484107  189874 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1013 22:07:34.484154  189874 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1013 22:07:34.484199  189874 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1013 22:07:34.484255  189874 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1013 22:07:34.484301  189874 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1013 22:07:34.484372  189874 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1013 22:07:34.484465  189874 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1013 22:07:34.484553  189874 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1013 22:07:34.484616  189874 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1013 22:07:34.487813  189874 out.go:252]   - Generating certificates and keys ...
	I1013 22:07:34.487901  189874 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1013 22:07:34.487967  189874 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1013 22:07:34.488038  189874 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1013 22:07:34.488098  189874 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1013 22:07:34.488167  189874 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1013 22:07:34.488220  189874 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1013 22:07:34.488284  189874 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1013 22:07:34.488414  189874 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [embed-certs-251758 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1013 22:07:34.488470  189874 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1013 22:07:34.488595  189874 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-251758 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1013 22:07:34.488664  189874 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1013 22:07:34.488730  189874 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1013 22:07:34.488777  189874 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1013 22:07:34.488835  189874 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1013 22:07:34.488889  189874 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1013 22:07:34.488949  189874 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1013 22:07:34.489012  189874 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1013 22:07:34.489079  189874 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1013 22:07:34.489137  189874 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1013 22:07:34.489222  189874 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1013 22:07:34.489291  189874 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1013 22:07:34.492870  189874 out.go:252]   - Booting up control plane ...
	I1013 22:07:34.493124  189874 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1013 22:07:34.493385  189874 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1013 22:07:34.493464  189874 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1013 22:07:34.493579  189874 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1013 22:07:34.493681  189874 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1013 22:07:34.493801  189874 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1013 22:07:34.493894  189874 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1013 22:07:34.493937  189874 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1013 22:07:34.494088  189874 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1013 22:07:34.494203  189874 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1013 22:07:34.494268  189874 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.803762ms
	I1013 22:07:34.494368  189874 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1013 22:07:34.494458  189874 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1013 22:07:34.494556  189874 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1013 22:07:34.494643  189874 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1013 22:07:34.494726  189874 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.514300366s
	I1013 22:07:34.494799  189874 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 5.069839554s
	I1013 22:07:34.494873  189874 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.50229395s
	I1013 22:07:34.494990  189874 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1013 22:07:34.495135  189874 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1013 22:07:34.495208  189874 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1013 22:07:34.495413  189874 kubeadm.go:318] [mark-control-plane] Marking the node embed-certs-251758 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1013 22:07:34.495474  189874 kubeadm.go:318] [bootstrap-token] Using token: tbwj2k.9iztrrkubxjit74s
	I1013 22:07:34.498367  189874 out.go:252]   - Configuring RBAC rules ...
	I1013 22:07:34.498553  189874 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1013 22:07:34.498702  189874 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1013 22:07:34.498907  189874 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1013 22:07:34.499108  189874 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1013 22:07:34.499268  189874 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1013 22:07:34.499367  189874 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1013 22:07:34.499491  189874 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1013 22:07:34.499537  189874 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1013 22:07:34.499586  189874 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1013 22:07:34.499590  189874 kubeadm.go:318] 
	I1013 22:07:34.499699  189874 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1013 22:07:34.499707  189874 kubeadm.go:318] 
	I1013 22:07:34.499976  189874 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1013 22:07:34.499987  189874 kubeadm.go:318] 
	I1013 22:07:34.500028  189874 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1013 22:07:34.500092  189874 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1013 22:07:34.500146  189874 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1013 22:07:34.500149  189874 kubeadm.go:318] 
	I1013 22:07:34.500207  189874 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1013 22:07:34.500211  189874 kubeadm.go:318] 
	I1013 22:07:34.500267  189874 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1013 22:07:34.500272  189874 kubeadm.go:318] 
	I1013 22:07:34.500327  189874 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1013 22:07:34.500407  189874 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1013 22:07:34.500479  189874 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1013 22:07:34.500483  189874 kubeadm.go:318] 
	I1013 22:07:34.500573  189874 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1013 22:07:34.500655  189874 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1013 22:07:34.500659  189874 kubeadm.go:318] 
	I1013 22:07:34.500852  189874 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token tbwj2k.9iztrrkubxjit74s \
	I1013 22:07:34.500966  189874 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:8246abc30e5ad4f73e0c5665e9f98b5f472397f3707b313971a0405cce9e4e60 \
	I1013 22:07:34.500989  189874 kubeadm.go:318] 	--control-plane 
	I1013 22:07:34.500993  189874 kubeadm.go:318] 
	I1013 22:07:34.501083  189874 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1013 22:07:34.501089  189874 kubeadm.go:318] 
	I1013 22:07:34.501220  189874 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token tbwj2k.9iztrrkubxjit74s \
	I1013 22:07:34.501356  189874 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:8246abc30e5ad4f73e0c5665e9f98b5f472397f3707b313971a0405cce9e4e60 
	I1013 22:07:34.501364  189874 cni.go:84] Creating CNI manager for ""
	I1013 22:07:34.501371  189874 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 22:07:34.504488  189874 out.go:179] * Configuring CNI (Container Networking Interface) ...
	
	
	==> CRI-O <==
	Oct 13 22:07:21 no-preload-998398 crio[840]: time="2025-10-13T22:07:21.955450561Z" level=info msg="Created container cc6d761dd61c0100da44b05cdd6c7e8f5760f14e0d49819caa6addcf50f77ddb: kube-system/coredns-66bc5c9577-7vlmn/coredns" id=346315c0-afb6-42b5-b903-b84139e0fc46 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:07:21 no-preload-998398 crio[840]: time="2025-10-13T22:07:21.956408756Z" level=info msg="Starting container: cc6d761dd61c0100da44b05cdd6c7e8f5760f14e0d49819caa6addcf50f77ddb" id=ade2e7ec-b4be-4fc3-af34-9a6803d28ea1 name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 22:07:21 no-preload-998398 crio[840]: time="2025-10-13T22:07:21.958019768Z" level=info msg="Started container" PID=2515 containerID=cc6d761dd61c0100da44b05cdd6c7e8f5760f14e0d49819caa6addcf50f77ddb description=kube-system/coredns-66bc5c9577-7vlmn/coredns id=ade2e7ec-b4be-4fc3-af34-9a6803d28ea1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=47b06a3a7b342c093e915f706078a40b64cc8d49465333e3b4e19ba168ebadb1
	Oct 13 22:07:25 no-preload-998398 crio[840]: time="2025-10-13T22:07:25.550295799Z" level=info msg="Running pod sandbox: default/busybox/POD" id=9e837832-2feb-4f77-9b3d-8539043f25aa name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 13 22:07:25 no-preload-998398 crio[840]: time="2025-10-13T22:07:25.550370086Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:07:25 no-preload-998398 crio[840]: time="2025-10-13T22:07:25.558862066Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:bc406b7e4ada6639503b34d2392677d7788a99aad456f6f7c2ff8aafa4503c45 UID:2606a914-28cd-4c36-8cc8-6609e307bd62 NetNS:/var/run/netns/504836c8-9930-4b87-9062-fad17183c84a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40028063d8}] Aliases:map[]}"
	Oct 13 22:07:25 no-preload-998398 crio[840]: time="2025-10-13T22:07:25.558901843Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 13 22:07:25 no-preload-998398 crio[840]: time="2025-10-13T22:07:25.568500314Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:bc406b7e4ada6639503b34d2392677d7788a99aad456f6f7c2ff8aafa4503c45 UID:2606a914-28cd-4c36-8cc8-6609e307bd62 NetNS:/var/run/netns/504836c8-9930-4b87-9062-fad17183c84a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40028063d8}] Aliases:map[]}"
	Oct 13 22:07:25 no-preload-998398 crio[840]: time="2025-10-13T22:07:25.56878633Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 13 22:07:25 no-preload-998398 crio[840]: time="2025-10-13T22:07:25.575774626Z" level=info msg="Ran pod sandbox bc406b7e4ada6639503b34d2392677d7788a99aad456f6f7c2ff8aafa4503c45 with infra container: default/busybox/POD" id=9e837832-2feb-4f77-9b3d-8539043f25aa name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 13 22:07:25 no-preload-998398 crio[840]: time="2025-10-13T22:07:25.576921428Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f659c5bf-007f-4e2b-b76c-371b4bcb9301 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:07:25 no-preload-998398 crio[840]: time="2025-10-13T22:07:25.577120455Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=f659c5bf-007f-4e2b-b76c-371b4bcb9301 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:07:25 no-preload-998398 crio[840]: time="2025-10-13T22:07:25.577215533Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=f659c5bf-007f-4e2b-b76c-371b4bcb9301 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:07:25 no-preload-998398 crio[840]: time="2025-10-13T22:07:25.580139234Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f569f78f-da7d-4ca5-8147-5a22ed7df4cb name=/runtime.v1.ImageService/PullImage
	Oct 13 22:07:25 no-preload-998398 crio[840]: time="2025-10-13T22:07:25.583326846Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 13 22:07:27 no-preload-998398 crio[840]: time="2025-10-13T22:07:27.666724626Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=f569f78f-da7d-4ca5-8147-5a22ed7df4cb name=/runtime.v1.ImageService/PullImage
	Oct 13 22:07:27 no-preload-998398 crio[840]: time="2025-10-13T22:07:27.667571358Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=1232c6f7-b6d2-4d10-b86d-04e7e1a0adac name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:07:27 no-preload-998398 crio[840]: time="2025-10-13T22:07:27.671951367Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b0524e39-066c-49fc-b74c-31bb3a50640e name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:07:27 no-preload-998398 crio[840]: time="2025-10-13T22:07:27.689036509Z" level=info msg="Creating container: default/busybox/busybox" id=101450ea-fa10-4ef0-8765-9355406b3d11 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:07:27 no-preload-998398 crio[840]: time="2025-10-13T22:07:27.689789622Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:07:27 no-preload-998398 crio[840]: time="2025-10-13T22:07:27.698325745Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:07:27 no-preload-998398 crio[840]: time="2025-10-13T22:07:27.69876959Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:07:27 no-preload-998398 crio[840]: time="2025-10-13T22:07:27.726144722Z" level=info msg="Created container 18c17d49f3fdd03609248d25a97b6f099a0664276e0889bf8933285b0161f323: default/busybox/busybox" id=101450ea-fa10-4ef0-8765-9355406b3d11 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:07:27 no-preload-998398 crio[840]: time="2025-10-13T22:07:27.72699654Z" level=info msg="Starting container: 18c17d49f3fdd03609248d25a97b6f099a0664276e0889bf8933285b0161f323" id=aad27aca-76f3-42b3-a6d1-bfaca40b573a name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 22:07:27 no-preload-998398 crio[840]: time="2025-10-13T22:07:27.729989745Z" level=info msg="Started container" PID=2567 containerID=18c17d49f3fdd03609248d25a97b6f099a0664276e0889bf8933285b0161f323 description=default/busybox/busybox id=aad27aca-76f3-42b3-a6d1-bfaca40b573a name=/runtime.v1.RuntimeService/StartContainer sandboxID=bc406b7e4ada6639503b34d2392677d7788a99aad456f6f7c2ff8aafa4503c45
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	18c17d49f3fdd       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago       Running             busybox                   0                   bc406b7e4ada6       busybox                                     default
	cc6d761dd61c0       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      14 seconds ago      Running             coredns                   0                   47b06a3a7b342       coredns-66bc5c9577-7vlmn                    kube-system
	a3ca81b8cb3ce       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                      14 seconds ago      Running             storage-provisioner       0                   4622e92454448       storage-provisioner                         kube-system
	57af0d3644358       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    25 seconds ago      Running             kindnet-cni               0                   996c92d573799       kindnet-6nvxb                               kube-system
	5e5547e1e3db8       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      28 seconds ago      Running             kube-proxy                0                   c8844143ac48b       kube-proxy-7zmxr                            kube-system
	d3e43b8ed8433       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      44 seconds ago      Running             kube-scheduler            0                   6cf7cb921a88f       kube-scheduler-no-preload-998398            kube-system
	9264652f1bef7       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      44 seconds ago      Running             kube-controller-manager   0                   4aa196133d60e       kube-controller-manager-no-preload-998398   kube-system
	1bb346065d446       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      44 seconds ago      Running             etcd                      0                   e8cf4052a2002       etcd-no-preload-998398                      kube-system
	2c4e16e9e1fff       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      44 seconds ago      Running             kube-apiserver            0                   3f9a41976d540       kube-apiserver-no-preload-998398            kube-system
	
	
	==> coredns [cc6d761dd61c0100da44b05cdd6c7e8f5760f14e0d49819caa6addcf50f77ddb] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39649 - 40279 "HINFO IN 3381701016489076521.3975715340038577807. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.016339602s
	
	
	==> describe nodes <==
	Name:               no-preload-998398
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-998398
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6d66ff63385795e7745a92b3d96cb54f5b977801
	                    minikube.k8s.io/name=no-preload-998398
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_13T22_07_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 Oct 2025 22:06:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-998398
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 Oct 2025 22:07:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 Oct 2025 22:07:31 +0000   Mon, 13 Oct 2025 22:06:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 Oct 2025 22:07:31 +0000   Mon, 13 Oct 2025 22:06:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 Oct 2025 22:07:31 +0000   Mon, 13 Oct 2025 22:06:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 13 Oct 2025 22:07:31 +0000   Mon, 13 Oct 2025 22:07:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-998398
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 a9ac2dd5f0f1421ea548959e3f798c4c
	  System UUID:                8be1b8dc-60be-4cac-9ebb-ba90ed9c5cdb
	  Boot ID:                    d306204e-e74c-4697-9887-c9f3c96cc083
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-7vlmn                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     30s
	  kube-system                 etcd-no-preload-998398                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         35s
	  kube-system                 kindnet-6nvxb                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      30s
	  kube-system                 kube-apiserver-no-preload-998398             250m (12%)    0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-controller-manager-no-preload-998398    200m (10%)    0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-proxy-7zmxr                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-no-preload-998398             100m (5%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 28s                kube-proxy       
	  Normal   Starting                 45s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 45s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  45s (x8 over 45s)  kubelet          Node no-preload-998398 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    45s (x8 over 45s)  kubelet          Node no-preload-998398 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     45s (x8 over 45s)  kubelet          Node no-preload-998398 status is now: NodeHasSufficientPID
	  Normal   Starting                 36s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 36s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  35s                kubelet          Node no-preload-998398 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    35s                kubelet          Node no-preload-998398 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     35s                kubelet          Node no-preload-998398 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           31s                node-controller  Node no-preload-998398 event: Registered Node no-preload-998398 in Controller
	  Normal   NodeReady                15s                kubelet          Node no-preload-998398 status is now: NodeReady
	
	
	==> dmesg <==
	[ +36.803698] overlayfs: idmapped layers are currently not supported
	[Oct13 21:38] overlayfs: idmapped layers are currently not supported
	[Oct13 21:39] overlayfs: idmapped layers are currently not supported
	[Oct13 21:40] overlayfs: idmapped layers are currently not supported
	[Oct13 21:41] overlayfs: idmapped layers are currently not supported
	[Oct13 21:42] overlayfs: idmapped layers are currently not supported
	[  +7.684868] overlayfs: idmapped layers are currently not supported
	[Oct13 21:43] overlayfs: idmapped layers are currently not supported
	[ +17.500139] overlayfs: idmapped layers are currently not supported
	[Oct13 21:44] overlayfs: idmapped layers are currently not supported
	[ +25.978359] overlayfs: idmapped layers are currently not supported
	[Oct13 21:46] overlayfs: idmapped layers are currently not supported
	[Oct13 21:47] overlayfs: idmapped layers are currently not supported
	[Oct13 21:49] overlayfs: idmapped layers are currently not supported
	[Oct13 21:50] overlayfs: idmapped layers are currently not supported
	[Oct13 21:51] overlayfs: idmapped layers are currently not supported
	[Oct13 21:53] overlayfs: idmapped layers are currently not supported
	[Oct13 21:54] overlayfs: idmapped layers are currently not supported
	[Oct13 21:55] overlayfs: idmapped layers are currently not supported
	[Oct13 22:02] overlayfs: idmapped layers are currently not supported
	[Oct13 22:04] overlayfs: idmapped layers are currently not supported
	[ +37.438407] overlayfs: idmapped layers are currently not supported
	[Oct13 22:05] overlayfs: idmapped layers are currently not supported
	[Oct13 22:06] overlayfs: idmapped layers are currently not supported
	[Oct13 22:07] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [1bb346065d446b8bf24eb6e01f57cdb856760485202fd60bed714641c4045062] <==
	{"level":"warn","ts":"2025-10-13T22:06:55.634579Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:06:55.672649Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:06:55.698838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:06:55.748063Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:06:55.799552Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:06:55.840207Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:06:55.865686Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:06:55.909664Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:06:55.917856Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:06:55.962163Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:06:55.972645Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:06:56.065400Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:07:06.549165Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"100.645411ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/kindnet\" limit:1 ","response":"range_response_count:1 size:520"}
	{"level":"info","ts":"2025-10-13T22:07:06.549248Z","caller":"traceutil/trace.go:172","msg":"trace[210944878] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/kindnet; range_end:; response_count:1; response_revision:367; }","duration":"100.754019ms","start":"2025-10-13T22:07:06.448480Z","end":"2025-10-13T22:07:06.549234Z","steps":["trace[210944878] 'agreement among raft nodes before linearized reading'  (duration: 47.872604ms)","trace[210944878] 'range keys from in-memory index tree'  (duration: 51.912454ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-13T22:07:06.549861Z","caller":"traceutil/trace.go:172","msg":"trace[1662220766] transaction","detail":"{read_only:false; response_revision:371; number_of_response:1; }","duration":"155.271865ms","start":"2025-10-13T22:07:06.394578Z","end":"2025-10-13T22:07:06.549850Z","steps":["trace[1662220766] 'process raft request'  (duration: 155.24396ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-13T22:07:06.551643Z","caller":"traceutil/trace.go:172","msg":"trace[930300145] transaction","detail":"{read_only:false; response_revision:368; number_of_response:1; }","duration":"165.853048ms","start":"2025-10-13T22:07:06.385779Z","end":"2025-10-13T22:07:06.551632Z","steps":["trace[930300145] 'process raft request'  (duration: 110.536921ms)","trace[930300145] 'compare'  (duration: 53.302418ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-13T22:07:06.551919Z","caller":"traceutil/trace.go:172","msg":"trace[789829000] transaction","detail":"{read_only:false; response_revision:369; number_of_response:1; }","duration":"164.583639ms","start":"2025-10-13T22:07:06.387327Z","end":"2025-10-13T22:07:06.551910Z","steps":["trace[789829000] 'process raft request'  (duration: 162.421232ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-13T22:07:06.552042Z","caller":"traceutil/trace.go:172","msg":"trace[1674959104] transaction","detail":"{read_only:false; response_revision:370; number_of_response:1; }","duration":"158.630837ms","start":"2025-10-13T22:07:06.393403Z","end":"2025-10-13T22:07:06.552034Z","steps":["trace[1674959104] 'process raft request'  (duration: 156.390786ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-13T22:07:06.567131Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"109.778384ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" limit:1 ","response":"range_response_count:1 size:3988"}
	{"level":"info","ts":"2025-10-13T22:07:06.567181Z","caller":"traceutil/trace.go:172","msg":"trace[2031287534] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:371; }","duration":"109.842456ms","start":"2025-10-13T22:07:06.457329Z","end":"2025-10-13T22:07:06.567171Z","steps":["trace[2031287534] 'agreement among raft nodes before linearized reading'  (duration: 109.696335ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-13T22:07:06.571932Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"121.985019ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" limit:1 ","response":"range_response_count:1 size:207"}
	{"level":"info","ts":"2025-10-13T22:07:06.571985Z","caller":"traceutil/trace.go:172","msg":"trace[99483387] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/replicaset-controller; range_end:; response_count:1; response_revision:371; }","duration":"122.053153ms","start":"2025-10-13T22:07:06.449921Z","end":"2025-10-13T22:07:06.571975Z","steps":["trace[99483387] 'agreement among raft nodes before linearized reading'  (duration: 117.407247ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-13T22:07:06.572584Z","caller":"traceutil/trace.go:172","msg":"trace[300822707] transaction","detail":"{read_only:false; response_revision:372; number_of_response:1; }","duration":"124.322569ms","start":"2025-10-13T22:07:06.448249Z","end":"2025-10-13T22:07:06.572571Z","steps":["trace[300822707] 'process raft request'  (duration: 124.022728ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-13T22:07:06.692727Z","caller":"traceutil/trace.go:172","msg":"trace[43514657] transaction","detail":"{read_only:false; response_revision:374; number_of_response:1; }","duration":"109.540564ms","start":"2025-10-13T22:07:06.583159Z","end":"2025-10-13T22:07:06.692699Z","steps":["trace[43514657] 'process raft request'  (duration: 82.729708ms)","trace[43514657] 'compare'  (duration: 26.711552ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-13T22:07:06.856579Z","caller":"traceutil/trace.go:172","msg":"trace[290789731] transaction","detail":"{read_only:false; response_revision:381; number_of_response:1; }","duration":"118.651425ms","start":"2025-10-13T22:07:06.737910Z","end":"2025-10-13T22:07:06.856562Z","steps":["trace[290789731] 'process raft request'  (duration: 69.981351ms)","trace[290789731] 'compare'  (duration: 48.044629ms)"],"step_count":2}
	
	
	==> kernel <==
	 22:07:36 up  1:49,  0 user,  load average: 4.12, 2.74, 2.16
	Linux no-preload-998398 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [57af0d36443589853fcad5f001614428a15fae00571f3b3a890eace8f3da89b3] <==
	I1013 22:07:11.012428       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1013 22:07:11.013586       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1013 22:07:11.013743       1 main.go:148] setting mtu 1500 for CNI 
	I1013 22:07:11.013762       1 main.go:178] kindnetd IP family: "ipv4"
	I1013 22:07:11.013775       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-13T22:07:11Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1013 22:07:11.307414       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1013 22:07:11.309176       1 controller.go:381] "Waiting for informer caches to sync"
	I1013 22:07:11.309242       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1013 22:07:11.309406       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1013 22:07:11.510330       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1013 22:07:11.510362       1 metrics.go:72] Registering metrics
	I1013 22:07:11.510412       1 controller.go:711] "Syncing nftables rules"
	I1013 22:07:21.312453       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1013 22:07:21.312643       1 main.go:301] handling current node
	I1013 22:07:31.307894       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1013 22:07:31.307996       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2c4e16e9e1fff46676a17cc54805a119fc2b1b08442a3b2c67c042f5aa74f7ad] <==
	I1013 22:06:57.456244       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1013 22:06:57.470844       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1013 22:06:57.470918       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1013 22:06:57.476967       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1013 22:06:57.478825       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1013 22:06:57.579585       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1013 22:06:57.579667       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1013 22:06:58.148531       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1013 22:06:58.155848       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1013 22:06:58.155870       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1013 22:06:59.364829       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1013 22:06:59.434805       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1013 22:06:59.574757       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1013 22:06:59.585263       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1013 22:06:59.586511       1 controller.go:667] quota admission added evaluator for: endpoints
	I1013 22:06:59.592020       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1013 22:07:00.249535       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1013 22:07:00.820044       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1013 22:07:00.869123       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1013 22:07:00.904795       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1013 22:07:05.313238       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1013 22:07:05.352766       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1013 22:07:06.074303       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1013 22:07:06.308302       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1013 22:07:34.363311       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:49402: use of closed network connection
	
	
	==> kube-controller-manager [9264652f1bef7a86103dd00f97068a37742120cae09f2c6d9fb2141418be324a] <==
	I1013 22:07:05.278839       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1013 22:07:05.278883       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1013 22:07:05.282855       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1013 22:07:05.283525       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1013 22:07:05.283547       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1013 22:07:05.283592       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1013 22:07:05.283628       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1013 22:07:05.286155       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1013 22:07:05.287319       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1013 22:07:05.296738       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1013 22:07:05.296908       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1013 22:07:05.297265       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1013 22:07:05.300412       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 22:07:05.314264       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 22:07:05.314283       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1013 22:07:05.314289       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1013 22:07:05.321984       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1013 22:07:05.322037       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1013 22:07:05.322051       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1013 22:07:05.322065       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1013 22:07:05.322075       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1013 22:07:05.326222       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1013 22:07:05.326271       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1013 22:07:05.326535       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1013 22:07:25.281510       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [5e5547e1e3db884b4c73b3976c34665c5a29a4947aeecadedbb2d8732bab46fd] <==
	I1013 22:07:07.346178       1 server_linux.go:53] "Using iptables proxy"
	I1013 22:07:07.616447       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 22:07:07.717447       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 22:07:07.717601       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1013 22:07:07.717701       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 22:07:07.860274       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1013 22:07:07.860335       1 server_linux.go:132] "Using iptables Proxier"
	I1013 22:07:07.864889       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 22:07:07.865154       1 server.go:527] "Version info" version="v1.34.1"
	I1013 22:07:07.865169       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 22:07:07.866502       1 config.go:200] "Starting service config controller"
	I1013 22:07:07.866511       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 22:07:07.866527       1 config.go:106] "Starting endpoint slice config controller"
	I1013 22:07:07.866532       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 22:07:07.866554       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 22:07:07.866558       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 22:07:07.867160       1 config.go:309] "Starting node config controller"
	I1013 22:07:07.867167       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 22:07:07.867172       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 22:07:07.968202       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1013 22:07:07.968249       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1013 22:07:07.968288       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [d3e43b8ed843313e4214a3f9babe0bc0dd2ad487daac47b52ca777a4c036e598] <==
	I1013 22:06:55.763468       1 serving.go:386] Generated self-signed cert in-memory
	W1013 22:06:59.129545       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1013 22:06:59.129590       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1013 22:06:59.129614       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1013 22:06:59.129622       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1013 22:06:59.184830       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1013 22:06:59.185069       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 22:06:59.188363       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1013 22:06:59.188458       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 22:06:59.191015       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 22:06:59.188483       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1013 22:06:59.232788       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1013 22:07:00.192765       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 13 22:07:02 no-preload-998398 kubelet[2021]: I1013 22:07:02.126424    2021 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-no-preload-998398" podStartSLOduration=1.12640306 podStartE2EDuration="1.12640306s" podCreationTimestamp="2025-10-13 22:07:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 22:07:02.073424063 +0000 UTC m=+1.364584771" watchObservedRunningTime="2025-10-13 22:07:02.12640306 +0000 UTC m=+1.417563776"
	Oct 13 22:07:05 no-preload-998398 kubelet[2021]: I1013 22:07:05.291219    2021 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 13 22:07:05 no-preload-998398 kubelet[2021]: I1013 22:07:05.293920    2021 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 13 22:07:06 no-preload-998398 kubelet[2021]: I1013 22:07:06.332969    2021 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5e372ae0-66e2-4ba1-a61a-de71523b139d-xtables-lock\") pod \"kindnet-6nvxb\" (UID: \"5e372ae0-66e2-4ba1-a61a-de71523b139d\") " pod="kube-system/kindnet-6nvxb"
	Oct 13 22:07:06 no-preload-998398 kubelet[2021]: I1013 22:07:06.333014    2021 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjg5x\" (UniqueName: \"kubernetes.io/projected/5e372ae0-66e2-4ba1-a61a-de71523b139d-kube-api-access-mjg5x\") pod \"kindnet-6nvxb\" (UID: \"5e372ae0-66e2-4ba1-a61a-de71523b139d\") " pod="kube-system/kindnet-6nvxb"
	Oct 13 22:07:06 no-preload-998398 kubelet[2021]: I1013 22:07:06.333041    2021 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/5e372ae0-66e2-4ba1-a61a-de71523b139d-cni-cfg\") pod \"kindnet-6nvxb\" (UID: \"5e372ae0-66e2-4ba1-a61a-de71523b139d\") " pod="kube-system/kindnet-6nvxb"
	Oct 13 22:07:06 no-preload-998398 kubelet[2021]: I1013 22:07:06.333061    2021 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5e372ae0-66e2-4ba1-a61a-de71523b139d-lib-modules\") pod \"kindnet-6nvxb\" (UID: \"5e372ae0-66e2-4ba1-a61a-de71523b139d\") " pod="kube-system/kindnet-6nvxb"
	Oct 13 22:07:06 no-preload-998398 kubelet[2021]: I1013 22:07:06.434791    2021 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e943a88c-1969-4fb7-bbe9-03f3a93e5d6d-kube-proxy\") pod \"kube-proxy-7zmxr\" (UID: \"e943a88c-1969-4fb7-bbe9-03f3a93e5d6d\") " pod="kube-system/kube-proxy-7zmxr"
	Oct 13 22:07:06 no-preload-998398 kubelet[2021]: I1013 22:07:06.434840    2021 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e943a88c-1969-4fb7-bbe9-03f3a93e5d6d-lib-modules\") pod \"kube-proxy-7zmxr\" (UID: \"e943a88c-1969-4fb7-bbe9-03f3a93e5d6d\") " pod="kube-system/kube-proxy-7zmxr"
	Oct 13 22:07:06 no-preload-998398 kubelet[2021]: I1013 22:07:06.434863    2021 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fht9k\" (UniqueName: \"kubernetes.io/projected/e943a88c-1969-4fb7-bbe9-03f3a93e5d6d-kube-api-access-fht9k\") pod \"kube-proxy-7zmxr\" (UID: \"e943a88c-1969-4fb7-bbe9-03f3a93e5d6d\") " pod="kube-system/kube-proxy-7zmxr"
	Oct 13 22:07:06 no-preload-998398 kubelet[2021]: I1013 22:07:06.434890    2021 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e943a88c-1969-4fb7-bbe9-03f3a93e5d6d-xtables-lock\") pod \"kube-proxy-7zmxr\" (UID: \"e943a88c-1969-4fb7-bbe9-03f3a93e5d6d\") " pod="kube-system/kube-proxy-7zmxr"
	Oct 13 22:07:06 no-preload-998398 kubelet[2021]: I1013 22:07:06.626046    2021 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 13 22:07:09 no-preload-998398 kubelet[2021]: I1013 22:07:09.219415    2021 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7zmxr" podStartSLOduration=3.219396114 podStartE2EDuration="3.219396114s" podCreationTimestamp="2025-10-13 22:07:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 22:07:08.194769935 +0000 UTC m=+7.485930643" watchObservedRunningTime="2025-10-13 22:07:09.219396114 +0000 UTC m=+8.510556814"
	Oct 13 22:07:11 no-preload-998398 kubelet[2021]: I1013 22:07:11.303453    2021 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-6nvxb" podStartSLOduration=1.4196292449999999 podStartE2EDuration="5.303434597s" podCreationTimestamp="2025-10-13 22:07:06 +0000 UTC" firstStartedPulling="2025-10-13 22:07:07.008253085 +0000 UTC m=+6.299413793" lastFinishedPulling="2025-10-13 22:07:10.892058429 +0000 UTC m=+10.183219145" observedRunningTime="2025-10-13 22:07:11.142778417 +0000 UTC m=+10.433939125" watchObservedRunningTime="2025-10-13 22:07:11.303434597 +0000 UTC m=+10.594595305"
	Oct 13 22:07:21 no-preload-998398 kubelet[2021]: I1013 22:07:21.383774    2021 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 13 22:07:21 no-preload-998398 kubelet[2021]: I1013 22:07:21.450387    2021 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdx57\" (UniqueName: \"kubernetes.io/projected/c073142f-fc41-4606-802d-105fcab5d408-kube-api-access-jdx57\") pod \"storage-provisioner\" (UID: \"c073142f-fc41-4606-802d-105fcab5d408\") " pod="kube-system/storage-provisioner"
	Oct 13 22:07:21 no-preload-998398 kubelet[2021]: I1013 22:07:21.450638    2021 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/c073142f-fc41-4606-802d-105fcab5d408-tmp\") pod \"storage-provisioner\" (UID: \"c073142f-fc41-4606-802d-105fcab5d408\") " pod="kube-system/storage-provisioner"
	Oct 13 22:07:21 no-preload-998398 kubelet[2021]: I1013 22:07:21.551593    2021 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/edd4eb6c-ff17-43de-a57d-d119a7cad435-config-volume\") pod \"coredns-66bc5c9577-7vlmn\" (UID: \"edd4eb6c-ff17-43de-a57d-d119a7cad435\") " pod="kube-system/coredns-66bc5c9577-7vlmn"
	Oct 13 22:07:21 no-preload-998398 kubelet[2021]: I1013 22:07:21.552074    2021 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cx7f6\" (UniqueName: \"kubernetes.io/projected/edd4eb6c-ff17-43de-a57d-d119a7cad435-kube-api-access-cx7f6\") pod \"coredns-66bc5c9577-7vlmn\" (UID: \"edd4eb6c-ff17-43de-a57d-d119a7cad435\") " pod="kube-system/coredns-66bc5c9577-7vlmn"
	Oct 13 22:07:21 no-preload-998398 kubelet[2021]: W1013 22:07:21.868430    2021 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/6fb16f37ec05da7de816bc3e9c2e323940cdc45e442a73f88f3547ae26c3416c/crio-47b06a3a7b342c093e915f706078a40b64cc8d49465333e3b4e19ba168ebadb1 WatchSource:0}: Error finding container 47b06a3a7b342c093e915f706078a40b64cc8d49465333e3b4e19ba168ebadb1: Status 404 returned error can't find the container with id 47b06a3a7b342c093e915f706078a40b64cc8d49465333e3b4e19ba168ebadb1
	Oct 13 22:07:22 no-preload-998398 kubelet[2021]: I1013 22:07:22.210717    2021 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-7vlmn" podStartSLOduration=16.210690726 podStartE2EDuration="16.210690726s" podCreationTimestamp="2025-10-13 22:07:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 22:07:22.186247024 +0000 UTC m=+21.477407740" watchObservedRunningTime="2025-10-13 22:07:22.210690726 +0000 UTC m=+21.501851434"
	Oct 13 22:07:23 no-preload-998398 kubelet[2021]: I1013 22:07:23.187577    2021 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.187554517 podStartE2EDuration="15.187554517s" podCreationTimestamp="2025-10-13 22:07:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 22:07:22.212801426 +0000 UTC m=+21.503962134" watchObservedRunningTime="2025-10-13 22:07:23.187554517 +0000 UTC m=+22.478715241"
	Oct 13 22:07:25 no-preload-998398 kubelet[2021]: I1013 22:07:25.286472    2021 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hrhh\" (UniqueName: \"kubernetes.io/projected/2606a914-28cd-4c36-8cc8-6609e307bd62-kube-api-access-9hrhh\") pod \"busybox\" (UID: \"2606a914-28cd-4c36-8cc8-6609e307bd62\") " pod="default/busybox"
	Oct 13 22:07:25 no-preload-998398 kubelet[2021]: W1013 22:07:25.573522    2021 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/6fb16f37ec05da7de816bc3e9c2e323940cdc45e442a73f88f3547ae26c3416c/crio-bc406b7e4ada6639503b34d2392677d7788a99aad456f6f7c2ff8aafa4503c45 WatchSource:0}: Error finding container bc406b7e4ada6639503b34d2392677d7788a99aad456f6f7c2ff8aafa4503c45: Status 404 returned error can't find the container with id bc406b7e4ada6639503b34d2392677d7788a99aad456f6f7c2ff8aafa4503c45
	Oct 13 22:07:28 no-preload-998398 kubelet[2021]: I1013 22:07:28.193211    2021 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.102261228 podStartE2EDuration="3.193193104s" podCreationTimestamp="2025-10-13 22:07:25 +0000 UTC" firstStartedPulling="2025-10-13 22:07:25.577480634 +0000 UTC m=+24.868641342" lastFinishedPulling="2025-10-13 22:07:27.66841251 +0000 UTC m=+26.959573218" observedRunningTime="2025-10-13 22:07:28.192215365 +0000 UTC m=+27.483376081" watchObservedRunningTime="2025-10-13 22:07:28.193193104 +0000 UTC m=+27.484353812"
	
	
	==> storage-provisioner [a3ca81b8cb3cea2e7632fb4efeb9486faad87120746497b5eb48a6a28bc60f55] <==
	I1013 22:07:21.849159       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1013 22:07:21.881837       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1013 22:07:21.881882       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1013 22:07:21.920026       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:07:21.930175       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1013 22:07:21.930335       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1013 22:07:21.933284       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-998398_9996df41-ffdb-47b5-9d85-e9750f485517!
	I1013 22:07:21.950111       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8efc1af7-8267-476e-8e56-255e4023ebf3", APIVersion:"v1", ResourceVersion:"449", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-998398_9996df41-ffdb-47b5-9d85-e9750f485517 became leader
	W1013 22:07:21.951241       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:07:21.985153       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1013 22:07:22.033697       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-998398_9996df41-ffdb-47b5-9d85-e9750f485517!
	W1013 22:07:23.988777       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:07:23.996443       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:07:25.999266       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:07:26.009117       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:07:28.012330       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:07:28.017990       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:07:30.022493       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:07:30.032404       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:07:32.036283       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:07:32.041220       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:07:34.045177       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:07:34.050463       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:07:36.055569       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:07:36.076612       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-998398 -n no-preload-998398
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-998398 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.74s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.63s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-251758 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-251758 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (293.46404ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:08:33Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-251758 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-251758 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-251758 describe deploy/metrics-server -n kube-system: exit status 1 (83.731492ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-251758 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-251758
helpers_test.go:243: (dbg) docker inspect embed-certs-251758:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "bce2b62de8b18e7b3cd1dad60cf5b9b624701b45de81bf20dc286e3c67300396",
	        "Created": "2025-10-13T22:07:07.277688258Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 190490,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-13T22:07:07.341531892Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/bce2b62de8b18e7b3cd1dad60cf5b9b624701b45de81bf20dc286e3c67300396/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bce2b62de8b18e7b3cd1dad60cf5b9b624701b45de81bf20dc286e3c67300396/hostname",
	        "HostsPath": "/var/lib/docker/containers/bce2b62de8b18e7b3cd1dad60cf5b9b624701b45de81bf20dc286e3c67300396/hosts",
	        "LogPath": "/var/lib/docker/containers/bce2b62de8b18e7b3cd1dad60cf5b9b624701b45de81bf20dc286e3c67300396/bce2b62de8b18e7b3cd1dad60cf5b9b624701b45de81bf20dc286e3c67300396-json.log",
	        "Name": "/embed-certs-251758",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-251758:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-251758",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "bce2b62de8b18e7b3cd1dad60cf5b9b624701b45de81bf20dc286e3c67300396",
	                "LowerDir": "/var/lib/docker/overlay2/6627eb940d8f167382df1d3afa375f7fb85691aca9adf9e1dbdcb28e949b9a84-init/diff:/var/lib/docker/overlay2/160e637be34680c755ffc6109c686a7fe5c4dfcba06c9274b3806724dc064518/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6627eb940d8f167382df1d3afa375f7fb85691aca9adf9e1dbdcb28e949b9a84/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6627eb940d8f167382df1d3afa375f7fb85691aca9adf9e1dbdcb28e949b9a84/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6627eb940d8f167382df1d3afa375f7fb85691aca9adf9e1dbdcb28e949b9a84/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-251758",
	                "Source": "/var/lib/docker/volumes/embed-certs-251758/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-251758",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-251758",
	                "name.minikube.sigs.k8s.io": "embed-certs-251758",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f6226bf5c6dbb7a6bd3355b5338c11d2d25bf31342071663f6827f320d96dc98",
	            "SandboxKey": "/var/run/docker/netns/f6226bf5c6db",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33070"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33068"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33069"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-251758": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ea:a7:a3:a4:b6:35",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b9096ba29d296c438f9a557fd2db13e4e114de39426eb54481a5b79f96f151ea",
	                    "EndpointID": "929bffab992d7e030168f31f4303178e4b63f98a2925f7749581f61b44fbc9ef",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-251758",
	                        "bce2b62de8b1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
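The inspect output above shows the kic container in the "running" state with its SSH and API-server ports published on 127.0.0.1. As a minimal sketch for re-checking the same fields by hand (assuming the embed-certs-251758 container still exists on the build host), the relevant Go templates can be passed directly to docker inspect:

	# container state (should read "running", matching .State.Status in the dump above)
	docker inspect -f '{{.State.Status}}' embed-certs-251758
	# host port mapped to the guest SSH port 22/tcp (33066 in the dump above)
	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' embed-certs-251758
	# node IP on the embed-certs-251758 docker network (192.168.85.2 in the dump above)
	docker inspect -f '{{(index .NetworkSettings.Networks "embed-certs-251758").IPAddress}}' embed-certs-251758

The second template is the same one the test harness itself runs via cli_runner in the logs below, so the values can be compared directly against the post-mortem output.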
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-251758 -n embed-certs-251758
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-251758 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-251758 logs -n 25: (1.323267158s)
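The audit table below shows the metrics-server enable on embed-certs-251758 with no recorded end time, and the interleaved node_ready retries suggest the node had not yet reported Ready when the addon was enabled. A minimal local-triage sketch, assuming the embed-certs-251758 profile is still present on the host, is to re-run the failing step with verbose logging and confirm the node condition first:

	# re-run the failing addon step with full logging
	out/minikube-linux-arm64 -p embed-certs-251758 addons enable metrics-server --alsologtostderr -v=1
	# check whether the node ever reached Ready
	out/minikube-linux-arm64 -p embed-certs-251758 kubectl -- get nodes -o wide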
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ delete  │ -p force-systemd-flag-257205                                                                                                                                                                                                                  │ force-systemd-flag-257205 │ jenkins │ v1.37.0 │ 13 Oct 25 22:02 UTC │ 13 Oct 25 22:02 UTC │
	│ start   │ -p cert-expiration-546667 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-546667    │ jenkins │ v1.37.0 │ 13 Oct 25 22:02 UTC │ 13 Oct 25 22:02 UTC │
	│ delete  │ -p force-systemd-env-312094                                                                                                                                                                                                                   │ force-systemd-env-312094  │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │ 13 Oct 25 22:03 UTC │
	│ start   │ -p cert-options-194931 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-194931       │ jenkins │ v1.37.0 │ 13 Oct 25 22:03 UTC │ 13 Oct 25 22:04 UTC │
	│ ssh     │ cert-options-194931 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-194931       │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │ 13 Oct 25 22:04 UTC │
	│ ssh     │ -p cert-options-194931 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-194931       │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │ 13 Oct 25 22:04 UTC │
	│ delete  │ -p cert-options-194931                                                                                                                                                                                                                        │ cert-options-194931       │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │ 13 Oct 25 22:04 UTC │
	│ start   │ -p old-k8s-version-061725 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-061725    │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │ 13 Oct 25 22:05 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-061725 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-061725    │ jenkins │ v1.37.0 │ 13 Oct 25 22:05 UTC │                     │
	│ stop    │ -p old-k8s-version-061725 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-061725    │ jenkins │ v1.37.0 │ 13 Oct 25 22:05 UTC │ 13 Oct 25 22:05 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-061725 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-061725    │ jenkins │ v1.37.0 │ 13 Oct 25 22:05 UTC │ 13 Oct 25 22:05 UTC │
	│ start   │ -p old-k8s-version-061725 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-061725    │ jenkins │ v1.37.0 │ 13 Oct 25 22:05 UTC │ 13 Oct 25 22:06 UTC │
	│ start   │ -p cert-expiration-546667 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-546667    │ jenkins │ v1.37.0 │ 13 Oct 25 22:05 UTC │ 13 Oct 25 22:06 UTC │
	│ delete  │ -p cert-expiration-546667                                                                                                                                                                                                                     │ cert-expiration-546667    │ jenkins │ v1.37.0 │ 13 Oct 25 22:06 UTC │ 13 Oct 25 22:06 UTC │
	│ start   │ -p no-preload-998398 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-998398         │ jenkins │ v1.37.0 │ 13 Oct 25 22:06 UTC │ 13 Oct 25 22:07 UTC │
	│ image   │ old-k8s-version-061725 image list --format=json                                                                                                                                                                                               │ old-k8s-version-061725    │ jenkins │ v1.37.0 │ 13 Oct 25 22:06 UTC │ 13 Oct 25 22:06 UTC │
	│ pause   │ -p old-k8s-version-061725 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-061725    │ jenkins │ v1.37.0 │ 13 Oct 25 22:06 UTC │                     │
	│ delete  │ -p old-k8s-version-061725                                                                                                                                                                                                                     │ old-k8s-version-061725    │ jenkins │ v1.37.0 │ 13 Oct 25 22:06 UTC │ 13 Oct 25 22:07 UTC │
	│ delete  │ -p old-k8s-version-061725                                                                                                                                                                                                                     │ old-k8s-version-061725    │ jenkins │ v1.37.0 │ 13 Oct 25 22:07 UTC │ 13 Oct 25 22:07 UTC │
	│ start   │ -p embed-certs-251758 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-251758        │ jenkins │ v1.37.0 │ 13 Oct 25 22:07 UTC │ 13 Oct 25 22:08 UTC │
	│ addons  │ enable metrics-server -p no-preload-998398 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-998398         │ jenkins │ v1.37.0 │ 13 Oct 25 22:07 UTC │                     │
	│ stop    │ -p no-preload-998398 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-998398         │ jenkins │ v1.37.0 │ 13 Oct 25 22:07 UTC │ 13 Oct 25 22:07 UTC │
	│ addons  │ enable dashboard -p no-preload-998398 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-998398         │ jenkins │ v1.37.0 │ 13 Oct 25 22:07 UTC │ 13 Oct 25 22:07 UTC │
	│ start   │ -p no-preload-998398 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-998398         │ jenkins │ v1.37.0 │ 13 Oct 25 22:07 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-251758 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-251758        │ jenkins │ v1.37.0 │ 13 Oct 25 22:08 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 22:07:49
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 22:07:49.266720  193660 out.go:360] Setting OutFile to fd 1 ...
	I1013 22:07:49.266852  193660 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:07:49.266863  193660 out.go:374] Setting ErrFile to fd 2...
	I1013 22:07:49.266868  193660 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:07:49.267110  193660 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-2495/.minikube/bin
	I1013 22:07:49.267464  193660 out.go:368] Setting JSON to false
	I1013 22:07:49.268412  193660 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":6604,"bootTime":1760386666,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1013 22:07:49.268489  193660 start.go:141] virtualization:  
	I1013 22:07:49.273372  193660 out.go:179] * [no-preload-998398] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1013 22:07:49.276498  193660 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 22:07:49.276551  193660 notify.go:220] Checking for updates...
	I1013 22:07:49.282609  193660 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 22:07:49.285525  193660 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-2495/kubeconfig
	I1013 22:07:49.288423  193660 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-2495/.minikube
	I1013 22:07:49.291267  193660 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1013 22:07:49.294130  193660 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 22:07:49.297593  193660 config.go:182] Loaded profile config "no-preload-998398": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:07:49.298191  193660 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 22:07:49.329526  193660 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1013 22:07:49.329657  193660 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 22:07:49.385072  193660 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-13 22:07:49.375151834 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 22:07:49.385177  193660 docker.go:318] overlay module found
	I1013 22:07:49.388237  193660 out.go:179] * Using the docker driver based on existing profile
	I1013 22:07:49.390995  193660 start.go:305] selected driver: docker
	I1013 22:07:49.391013  193660 start.go:925] validating driver "docker" against &{Name:no-preload-998398 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-998398 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9
p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:07:49.391139  193660 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 22:07:49.392009  193660 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 22:07:49.456979  193660 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-13 22:07:49.445629003 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 22:07:49.457370  193660 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 22:07:49.457402  193660 cni.go:84] Creating CNI manager for ""
	I1013 22:07:49.457464  193660 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 22:07:49.457508  193660 start.go:349] cluster config:
	{Name:no-preload-998398 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-998398 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:07:49.460868  193660 out.go:179] * Starting "no-preload-998398" primary control-plane node in "no-preload-998398" cluster
	I1013 22:07:49.463642  193660 cache.go:123] Beginning downloading kic base image for docker with crio
	I1013 22:07:49.466557  193660 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1013 22:07:49.469429  193660 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:07:49.469458  193660 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1013 22:07:49.469555  193660 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/no-preload-998398/config.json ...
	I1013 22:07:49.469866  193660 cache.go:107] acquiring lock: {Name:mk9e23294529848fca5421602e65fa540d2ffe9f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 22:07:49.469939  193660 cache.go:115] /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1013 22:07:49.469954  193660 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 105.441µs
	I1013 22:07:49.469968  193660 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1013 22:07:49.469983  193660 cache.go:107] acquiring lock: {Name:mkb3086799a14ff1ebfc52e9ac9fba7b29bb30fb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 22:07:49.470021  193660 cache.go:115] /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1013 22:07:49.470032  193660 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 49.935µs
	I1013 22:07:49.470038  193660 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1013 22:07:49.470057  193660 cache.go:107] acquiring lock: {Name:mkee07c6d8760320632919489ff1ecb2e0d22d89 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 22:07:49.470089  193660 cache.go:115] /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1013 22:07:49.470098  193660 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 42.518µs
	I1013 22:07:49.470105  193660 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1013 22:07:49.470114  193660 cache.go:107] acquiring lock: {Name:mkfa4d23a7d0256f3cdf1cb2f33382ba7dbbfc71 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 22:07:49.470144  193660 cache.go:115] /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1013 22:07:49.470224  193660 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 109.839µs
	I1013 22:07:49.470256  193660 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1013 22:07:49.470271  193660 cache.go:107] acquiring lock: {Name:mk62ce0678b4b3038f2e150b1ed151bc360f3641 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 22:07:49.470312  193660 cache.go:115] /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1013 22:07:49.470322  193660 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 53.718µs
	I1013 22:07:49.470348  193660 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1013 22:07:49.470378  193660 cache.go:107] acquiring lock: {Name:mkb1d39c539d858c9b1c08f39ea3287bd6d91313 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 22:07:49.470416  193660 cache.go:115] /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1013 22:07:49.470426  193660 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 68.388µs
	I1013 22:07:49.470433  193660 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1013 22:07:49.470471  193660 cache.go:107] acquiring lock: {Name:mk2eb24896c7f2889da7dd223ade65489103932b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 22:07:49.470508  193660 cache.go:115] /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1013 22:07:49.470519  193660 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 50.814µs
	I1013 22:07:49.470525  193660 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1013 22:07:49.470591  193660 cache.go:107] acquiring lock: {Name:mkc9af2ce906bde484aa6a725326e8aa7fddb608 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 22:07:49.470633  193660 cache.go:115] /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1013 22:07:49.470644  193660 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 55.957µs
	I1013 22:07:49.470661  193660 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21724-2495/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1013 22:07:49.470673  193660 cache.go:87] Successfully saved all images to host disk.
	I1013 22:07:49.493983  193660 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1013 22:07:49.494007  193660 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1013 22:07:49.494026  193660 cache.go:232] Successfully downloaded all kic artifacts
	I1013 22:07:49.494054  193660 start.go:360] acquireMachinesLock for no-preload-998398: {Name:mk31dc6d65eb1bd4951f5e4881803fab3fbc7962 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 22:07:49.494123  193660 start.go:364] duration metric: took 48.623µs to acquireMachinesLock for "no-preload-998398"
	I1013 22:07:49.494147  193660 start.go:96] Skipping create...Using existing machine configuration
	I1013 22:07:49.494166  193660 fix.go:54] fixHost starting: 
	I1013 22:07:49.494410  193660 cli_runner.go:164] Run: docker container inspect no-preload-998398 --format={{.State.Status}}
	I1013 22:07:49.514805  193660 fix.go:112] recreateIfNeeded on no-preload-998398: state=Stopped err=<nil>
	W1013 22:07:49.514834  193660 fix.go:138] unexpected machine state, will restart: <nil>
	W1013 22:07:46.206015  189874 node_ready.go:57] node "embed-certs-251758" has "Ready":"False" status (will retry)
	W1013 22:07:48.699945  189874 node_ready.go:57] node "embed-certs-251758" has "Ready":"False" status (will retry)
	W1013 22:07:50.701060  189874 node_ready.go:57] node "embed-certs-251758" has "Ready":"False" status (will retry)
	I1013 22:07:49.518176  193660 out.go:252] * Restarting existing docker container for "no-preload-998398" ...
	I1013 22:07:49.518260  193660 cli_runner.go:164] Run: docker start no-preload-998398
	I1013 22:07:49.769168  193660 cli_runner.go:164] Run: docker container inspect no-preload-998398 --format={{.State.Status}}
	I1013 22:07:49.795068  193660 kic.go:430] container "no-preload-998398" state is running.
	I1013 22:07:49.795468  193660 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-998398
	I1013 22:07:49.832624  193660 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/no-preload-998398/config.json ...
	I1013 22:07:49.832851  193660 machine.go:93] provisionDockerMachine start ...
	I1013 22:07:49.832913  193660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-998398
	I1013 22:07:49.856761  193660 main.go:141] libmachine: Using SSH client type: native
	I1013 22:07:49.857072  193660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33071 <nil> <nil>}
	I1013 22:07:49.857082  193660 main.go:141] libmachine: About to run SSH command:
	hostname
	I1013 22:07:49.858252  193660 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1013 22:07:53.008972  193660 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-998398
	
	I1013 22:07:53.008999  193660 ubuntu.go:182] provisioning hostname "no-preload-998398"
	I1013 22:07:53.009065  193660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-998398
	I1013 22:07:53.026984  193660 main.go:141] libmachine: Using SSH client type: native
	I1013 22:07:53.027294  193660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33071 <nil> <nil>}
	I1013 22:07:53.027310  193660 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-998398 && echo "no-preload-998398" | sudo tee /etc/hostname
	I1013 22:07:53.180926  193660 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-998398
	
	I1013 22:07:53.181003  193660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-998398
	I1013 22:07:53.198692  193660 main.go:141] libmachine: Using SSH client type: native
	I1013 22:07:53.199102  193660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33071 <nil> <nil>}
	I1013 22:07:53.199141  193660 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-998398' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-998398/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-998398' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 22:07:53.347820  193660 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1013 22:07:53.347847  193660 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-2495/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-2495/.minikube}
	I1013 22:07:53.347877  193660 ubuntu.go:190] setting up certificates
	I1013 22:07:53.347886  193660 provision.go:84] configureAuth start
	I1013 22:07:53.347944  193660 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-998398
	I1013 22:07:53.365759  193660 provision.go:143] copyHostCerts
	I1013 22:07:53.365823  193660 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-2495/.minikube/cert.pem, removing ...
	I1013 22:07:53.365847  193660 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-2495/.minikube/cert.pem
	I1013 22:07:53.365925  193660 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-2495/.minikube/cert.pem (1123 bytes)
	I1013 22:07:53.366028  193660 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-2495/.minikube/key.pem, removing ...
	I1013 22:07:53.366038  193660 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-2495/.minikube/key.pem
	I1013 22:07:53.366067  193660 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-2495/.minikube/key.pem (1675 bytes)
	I1013 22:07:53.366133  193660 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-2495/.minikube/ca.pem, removing ...
	I1013 22:07:53.366141  193660 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-2495/.minikube/ca.pem
	I1013 22:07:53.366166  193660 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-2495/.minikube/ca.pem (1082 bytes)
	I1013 22:07:53.366224  193660 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-2495/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca-key.pem org=jenkins.no-preload-998398 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-998398]
	I1013 22:07:53.534682  193660 provision.go:177] copyRemoteCerts
	I1013 22:07:53.534754  193660 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 22:07:53.534798  193660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-998398
	I1013 22:07:53.552243  193660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33071 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/no-preload-998398/id_rsa Username:docker}
	I1013 22:07:53.657292  193660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1013 22:07:53.677795  193660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1013 22:07:53.695858  193660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1013 22:07:53.714142  193660 provision.go:87] duration metric: took 366.235538ms to configureAuth
	I1013 22:07:53.714168  193660 ubuntu.go:206] setting minikube options for container-runtime
	I1013 22:07:53.714362  193660 config.go:182] Loaded profile config "no-preload-998398": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:07:53.714452  193660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-998398
	I1013 22:07:53.731669  193660 main.go:141] libmachine: Using SSH client type: native
	I1013 22:07:53.731999  193660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33071 <nil> <nil>}
	I1013 22:07:53.732021  193660 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1013 22:07:54.056546  193660 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1013 22:07:54.056647  193660 machine.go:96] duration metric: took 4.223779089s to provisionDockerMachine
	I1013 22:07:54.056674  193660 start.go:293] postStartSetup for "no-preload-998398" (driver="docker")
	I1013 22:07:54.056713  193660 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 22:07:54.056814  193660 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 22:07:54.056896  193660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-998398
	I1013 22:07:54.079165  193660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33071 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/no-preload-998398/id_rsa Username:docker}
	I1013 22:07:54.183452  193660 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 22:07:54.186675  193660 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1013 22:07:54.186704  193660 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1013 22:07:54.186715  193660 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-2495/.minikube/addons for local assets ...
	I1013 22:07:54.186763  193660 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-2495/.minikube/files for local assets ...
	I1013 22:07:54.186845  193660 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem -> 42992.pem in /etc/ssl/certs
	I1013 22:07:54.186948  193660 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1013 22:07:54.193713  193660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem --> /etc/ssl/certs/42992.pem (1708 bytes)
	I1013 22:07:54.213314  193660 start.go:296] duration metric: took 156.597357ms for postStartSetup
	I1013 22:07:54.213386  193660 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 22:07:54.213431  193660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-998398
	I1013 22:07:54.232886  193660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33071 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/no-preload-998398/id_rsa Username:docker}
	I1013 22:07:54.337551  193660 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1013 22:07:54.342335  193660 fix.go:56] duration metric: took 4.848171924s for fixHost
	I1013 22:07:54.342361  193660 start.go:83] releasing machines lock for "no-preload-998398", held for 4.848226913s
	I1013 22:07:54.342441  193660 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-998398
	I1013 22:07:54.358235  193660 ssh_runner.go:195] Run: cat /version.json
	I1013 22:07:54.358288  193660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-998398
	I1013 22:07:54.358562  193660 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 22:07:54.358618  193660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-998398
	I1013 22:07:54.376354  193660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33071 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/no-preload-998398/id_rsa Username:docker}
	I1013 22:07:54.385668  193660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33071 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/no-preload-998398/id_rsa Username:docker}
	I1013 22:07:54.475497  193660 ssh_runner.go:195] Run: systemctl --version
	I1013 22:07:54.582283  193660 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1013 22:07:54.619364  193660 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 22:07:54.623696  193660 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 22:07:54.623819  193660 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 22:07:54.632190  193660 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1013 22:07:54.632269  193660 start.go:495] detecting cgroup driver to use...
	I1013 22:07:54.632308  193660 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1013 22:07:54.632358  193660 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1013 22:07:54.647865  193660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1013 22:07:54.660920  193660 docker.go:218] disabling cri-docker service (if available) ...
	I1013 22:07:54.660984  193660 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 22:07:54.676134  193660 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 22:07:54.689326  193660 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 22:07:54.816758  193660 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 22:07:54.929582  193660 docker.go:234] disabling docker service ...
	I1013 22:07:54.929657  193660 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 22:07:54.944279  193660 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 22:07:54.957179  193660 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 22:07:55.084564  193660 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 22:07:55.203355  193660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1013 22:07:55.216272  193660 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 22:07:55.231477  193660 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1013 22:07:55.231561  193660 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:07:55.241100  193660 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1013 22:07:55.241214  193660 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:07:55.250239  193660 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:07:55.260016  193660 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:07:55.269167  193660 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 22:07:55.278057  193660 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:07:55.287609  193660 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:07:55.296460  193660 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:07:55.305671  193660 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 22:07:55.313323  193660 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1013 22:07:55.321124  193660 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:07:55.436990  193660 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1013 22:07:55.580169  193660 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1013 22:07:55.580236  193660 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1013 22:07:55.584242  193660 start.go:563] Will wait 60s for crictl version
	I1013 22:07:55.584315  193660 ssh_runner.go:195] Run: which crictl
	I1013 22:07:55.587577  193660 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1013 22:07:55.611489  193660 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1013 22:07:55.611647  193660 ssh_runner.go:195] Run: crio --version
	I1013 22:07:55.644169  193660 ssh_runner.go:195] Run: crio --version
	I1013 22:07:55.680661  193660 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1013 22:07:53.203465  189874 node_ready.go:57] node "embed-certs-251758" has "Ready":"False" status (will retry)
	W1013 22:07:55.701907  189874 node_ready.go:57] node "embed-certs-251758" has "Ready":"False" status (will retry)
	I1013 22:07:55.683475  193660 cli_runner.go:164] Run: docker network inspect no-preload-998398 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 22:07:55.701122  193660 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1013 22:07:55.704972  193660 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 22:07:55.714001  193660 kubeadm.go:883] updating cluster {Name:no-preload-998398 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-998398 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 22:07:55.714115  193660 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:07:55.714154  193660 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 22:07:55.745537  193660 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 22:07:55.745559  193660 cache_images.go:85] Images are preloaded, skipping loading
	I1013 22:07:55.745567  193660 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1013 22:07:55.745658  193660 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-998398 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-998398 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
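
The fragment above is the systemd drop-in minikube renders for the kubelet; it is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a moment later (the 367-byte scp below). Reassembled from the log, the drop-in is roughly:

	# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (reconstructed from the log)
	[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-998398 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2

	[Install]

The empty ExecStart= line is the usual systemd idiom for clearing the packaged unit's command before substituting a new one; the later `systemctl daemon-reload` / `systemctl start kubelet` pair makes it take effect.
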
	I1013 22:07:55.745736  193660 ssh_runner.go:195] Run: crio config
	I1013 22:07:55.808092  193660 cni.go:84] Creating CNI manager for ""
	I1013 22:07:55.808176  193660 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 22:07:55.808212  193660 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1013 22:07:55.808276  193660 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-998398 NodeName:no-preload-998398 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 22:07:55.808436  193660 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-998398"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1013 22:07:55.808522  193660 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1013 22:07:55.821428  193660 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 22:07:55.821542  193660 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 22:07:55.828766  193660 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1013 22:07:55.841276  193660 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 22:07:55.854197  193660 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
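
The kubeadm config rendered above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) is what was just copied to /var/tmp/minikube/kubeadm.yaml.new and is later diffed against the existing kubeadm.yaml to decide whether the control plane needs reconfiguring. To lint a rendered file like this by hand, recent kubeadm releases ship a validator; this is an optional check that the log itself does not run:

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
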
	I1013 22:07:55.867763  193660 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1013 22:07:55.871246  193660 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
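
The bash one-liner above is the idempotent /etc/hosts update: it drops any existing line ending in "control-plane.minikube.internal", appends the fresh mapping, and copies the temp file back over /etc/hosts under sudo. The same pattern was applied for host.minikube.internal a few seconds earlier. Verifying the result by hand would just be:

	grep -E 'minikube\.internal$' /etc/hosts
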
	I1013 22:07:55.880978  193660 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:07:56.001895  193660 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 22:07:56.028232  193660 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/no-preload-998398 for IP: 192.168.76.2
	I1013 22:07:56.028267  193660 certs.go:195] generating shared ca certs ...
	I1013 22:07:56.028284  193660 certs.go:227] acquiring lock for ca certs: {Name:mk2386d3847709c1fe7ff4ab092e2e3fd8551167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:07:56.028425  193660 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-2495/.minikube/ca.key
	I1013 22:07:56.028486  193660 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.key
	I1013 22:07:56.028499  193660 certs.go:257] generating profile certs ...
	I1013 22:07:56.028590  193660 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/no-preload-998398/client.key
	I1013 22:07:56.028662  193660 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/no-preload-998398/apiserver.key.fe88bb21
	I1013 22:07:56.028705  193660 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/no-preload-998398/proxy-client.key
	I1013 22:07:56.028852  193660 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/4299.pem (1338 bytes)
	W1013 22:07:56.028892  193660 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-2495/.minikube/certs/4299_empty.pem, impossibly tiny 0 bytes
	I1013 22:07:56.028906  193660 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca-key.pem (1679 bytes)
	I1013 22:07:56.028932  193660 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem (1082 bytes)
	I1013 22:07:56.028966  193660 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem (1123 bytes)
	I1013 22:07:56.028995  193660 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/key.pem (1675 bytes)
	I1013 22:07:56.029045  193660 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem (1708 bytes)
	I1013 22:07:56.029673  193660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 22:07:56.050811  193660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1013 22:07:56.072372  193660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 22:07:56.094884  193660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1013 22:07:56.116426  193660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/no-preload-998398/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1013 22:07:56.144907  193660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/no-preload-998398/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1013 22:07:56.171842  193660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/no-preload-998398/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 22:07:56.197359  193660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/no-preload-998398/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1013 22:07:56.222896  193660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 22:07:56.241711  193660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/certs/4299.pem --> /usr/share/ca-certificates/4299.pem (1338 bytes)
	I1013 22:07:56.263300  193660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem --> /usr/share/ca-certificates/42992.pem (1708 bytes)
	I1013 22:07:56.291912  193660 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 22:07:56.306187  193660 ssh_runner.go:195] Run: openssl version
	I1013 22:07:56.312496  193660 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 22:07:56.322360  193660 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:07:56.326382  193660 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 20:59 /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:07:56.326449  193660 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:07:56.372125  193660 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1013 22:07:56.380098  193660 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4299.pem && ln -fs /usr/share/ca-certificates/4299.pem /etc/ssl/certs/4299.pem"
	I1013 22:07:56.388450  193660 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4299.pem
	I1013 22:07:56.392147  193660 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 13 21:06 /usr/share/ca-certificates/4299.pem
	I1013 22:07:56.392250  193660 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4299.pem
	I1013 22:07:56.433040  193660 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4299.pem /etc/ssl/certs/51391683.0"
	I1013 22:07:56.440814  193660 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/42992.pem && ln -fs /usr/share/ca-certificates/42992.pem /etc/ssl/certs/42992.pem"
	I1013 22:07:56.449124  193660 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42992.pem
	I1013 22:07:56.452797  193660 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 13 21:06 /usr/share/ca-certificates/42992.pem
	I1013 22:07:56.452861  193660 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42992.pem
	I1013 22:07:56.494245  193660 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/42992.pem /etc/ssl/certs/3ec20f2e.0"
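
The openssl/ln pairs above populate OpenSSL's hashed CApath layout: each CA copied into /usr/share/ca-certificates is hashed with `openssl x509 -hash -noout`, and a symlink named <hash>.0 is created in /etc/ssl/certs so clients can locate the certificate by subject hash (b5213941 is minikubeCA's hash in this run). A minimal hand-run sketch of the same idea for one certificate, using the paths from the log:

	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
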
	I1013 22:07:56.502071  193660 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 22:07:56.505730  193660 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1013 22:07:56.547498  193660 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1013 22:07:56.591201  193660 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1013 22:07:56.636136  193660 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1013 22:07:56.681475  193660 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1013 22:07:56.734237  193660 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1013 22:07:56.822027  193660 kubeadm.go:400] StartCluster: {Name:no-preload-998398 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-998398 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bi
naryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:07:56.822161  193660 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 22:07:56.822258  193660 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 22:07:56.901526  193660 cri.go:89] found id: "2346fd5f183a812bd5bdb156c3e135f978ddbc4289db62f027930721e4ad02ce"
	I1013 22:07:56.901594  193660 cri.go:89] found id: "8b465dfa7766b973d11779f1b004b6b9862a3752d706b497e8911ef92d698e5d"
	I1013 22:07:56.901611  193660 cri.go:89] found id: "fef06fef22a944406c398dc34d304bcc991484835183ae13edd49c795fa70c38"
	I1013 22:07:56.901629  193660 cri.go:89] found id: "6d4f60f057762a629c080a53d21fd933695b93f082a2b7fd989f3d4229ac75c3"
	I1013 22:07:56.901645  193660 cri.go:89] found id: ""
	I1013 22:07:56.901716  193660 ssh_runner.go:195] Run: sudo runc list -f json
	W1013 22:07:56.916987  193660 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:07:56Z" level=error msg="open /run/runc: no such file or directory"
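
The warning above is non-fatal: `runc list` fails because /run/runc does not exist, which normally just means no containers are currently tracked by runc under that root (CRI-O may be using a different runtime root or OCI runtime), so there is nothing to unpause and the restart logic simply moves on to checking for existing configuration files. One way to see which low-level runtime this CRI-O is configured with, not run in this log, is:

	sudo crio config | grep -A2 'default_runtime'
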
	I1013 22:07:56.917116  193660 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1013 22:07:56.925879  193660 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1013 22:07:56.925942  193660 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1013 22:07:56.926017  193660 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1013 22:07:56.939934  193660 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1013 22:07:56.940836  193660 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-998398" does not appear in /home/jenkins/minikube-integration/21724-2495/kubeconfig
	I1013 22:07:56.941407  193660 kubeconfig.go:62] /home/jenkins/minikube-integration/21724-2495/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-998398" cluster setting kubeconfig missing "no-preload-998398" context setting]
	I1013 22:07:56.942238  193660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/kubeconfig: {Name:mke211bdcf461494c5205a242d94f4d44bbce10e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:07:56.944378  193660 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1013 22:07:56.961050  193660 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1013 22:07:56.961121  193660 kubeadm.go:601] duration metric: took 35.160322ms to restartPrimaryControlPlane
	I1013 22:07:56.961147  193660 kubeadm.go:402] duration metric: took 139.129438ms to StartCluster
	I1013 22:07:56.961197  193660 settings.go:142] acquiring lock: {Name:mk4a4b065845724eb9b4bb1832a39a02e57dd066 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:07:56.961271  193660 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-2495/kubeconfig
	I1013 22:07:56.962754  193660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/kubeconfig: {Name:mke211bdcf461494c5205a242d94f4d44bbce10e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:07:56.963026  193660 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 22:07:56.963422  193660 config.go:182] Loaded profile config "no-preload-998398": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:07:56.963489  193660 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1013 22:07:56.963581  193660 addons.go:69] Setting storage-provisioner=true in profile "no-preload-998398"
	I1013 22:07:56.963609  193660 addons.go:238] Setting addon storage-provisioner=true in "no-preload-998398"
	W1013 22:07:56.963628  193660 addons.go:247] addon storage-provisioner should already be in state true
	I1013 22:07:56.963830  193660 host.go:66] Checking if "no-preload-998398" exists ...
	I1013 22:07:56.964328  193660 cli_runner.go:164] Run: docker container inspect no-preload-998398 --format={{.State.Status}}
	I1013 22:07:56.963746  193660 addons.go:69] Setting dashboard=true in profile "no-preload-998398"
	I1013 22:07:56.964788  193660 addons.go:238] Setting addon dashboard=true in "no-preload-998398"
	W1013 22:07:56.964798  193660 addons.go:247] addon dashboard should already be in state true
	I1013 22:07:56.964821  193660 host.go:66] Checking if "no-preload-998398" exists ...
	I1013 22:07:56.963755  193660 addons.go:69] Setting default-storageclass=true in profile "no-preload-998398"
	I1013 22:07:56.964902  193660 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-998398"
	I1013 22:07:56.965203  193660 cli_runner.go:164] Run: docker container inspect no-preload-998398 --format={{.State.Status}}
	I1013 22:07:56.965307  193660 cli_runner.go:164] Run: docker container inspect no-preload-998398 --format={{.State.Status}}
	I1013 22:07:56.969242  193660 out.go:179] * Verifying Kubernetes components...
	I1013 22:07:56.972315  193660 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:07:57.030360  193660 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1013 22:07:57.033886  193660 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1013 22:07:57.036994  193660 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 22:07:57.037019  193660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1013 22:07:57.037080  193660 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1013 22:07:57.037084  193660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-998398
	I1013 22:07:57.039290  193660 addons.go:238] Setting addon default-storageclass=true in "no-preload-998398"
	W1013 22:07:57.039311  193660 addons.go:247] addon default-storageclass should already be in state true
	I1013 22:07:57.039335  193660 host.go:66] Checking if "no-preload-998398" exists ...
	I1013 22:07:57.039753  193660 cli_runner.go:164] Run: docker container inspect no-preload-998398 --format={{.State.Status}}
	I1013 22:07:57.040110  193660 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1013 22:07:57.040127  193660 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1013 22:07:57.040173  193660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-998398
	I1013 22:07:57.086330  193660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33071 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/no-preload-998398/id_rsa Username:docker}
	I1013 22:07:57.096049  193660 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1013 22:07:57.096075  193660 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1013 22:07:57.096137  193660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-998398
	I1013 22:07:57.099479  193660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33071 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/no-preload-998398/id_rsa Username:docker}
	I1013 22:07:57.128879  193660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33071 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/no-preload-998398/id_rsa Username:docker}
	I1013 22:07:57.342331  193660 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 22:07:57.374859  193660 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1013 22:07:57.374885  193660 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1013 22:07:57.376030  193660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 22:07:57.402610  193660 node_ready.go:35] waiting up to 6m0s for node "no-preload-998398" to be "Ready" ...
	I1013 22:07:57.414492  193660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1013 22:07:57.433652  193660 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1013 22:07:57.433720  193660 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1013 22:07:57.531438  193660 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1013 22:07:57.531464  193660 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1013 22:07:57.618572  193660 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1013 22:07:57.618592  193660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1013 22:07:57.638555  193660 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1013 22:07:57.638575  193660 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1013 22:07:57.667273  193660 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1013 22:07:57.667294  193660 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1013 22:07:57.693933  193660 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1013 22:07:57.693998  193660 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1013 22:07:57.716832  193660 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1013 22:07:57.716901  193660 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1013 22:07:57.735342  193660 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1013 22:07:57.735412  193660 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1013 22:07:57.754463  193660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
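
The single kubectl apply above installs all ten dashboard manifests in one shot, using the in-node kubeconfig and the kubectl binary minikube staged under /var/lib/minikube/binaries. Once it completes, the rollout can be inspected the same way (kubernetes-dashboard being the namespace created by dashboard-ns.yaml):

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl -n kubernetes-dashboard get deploy,po
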
	W1013 22:07:58.199879  189874 node_ready.go:57] node "embed-certs-251758" has "Ready":"False" status (will retry)
	W1013 22:08:00.205752  189874 node_ready.go:57] node "embed-certs-251758" has "Ready":"False" status (will retry)
	I1013 22:08:01.374719  193660 node_ready.go:49] node "no-preload-998398" is "Ready"
	I1013 22:08:01.374747  193660 node_ready.go:38] duration metric: took 3.972106591s for node "no-preload-998398" to be "Ready" ...
	I1013 22:08:01.374765  193660 api_server.go:52] waiting for apiserver process to appear ...
	I1013 22:08:01.374910  193660 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 22:08:03.074156  193660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.698073747s)
	I1013 22:08:03.074263  193660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.659750125s)
	I1013 22:08:03.074616  193660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.320061975s)
	I1013 22:08:03.074874  193660 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.699942769s)
	I1013 22:08:03.074950  193660 api_server.go:72] duration metric: took 6.111801732s to wait for apiserver process to appear ...
	I1013 22:08:03.074971  193660 api_server.go:88] waiting for apiserver healthz status ...
	I1013 22:08:03.075019  193660 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1013 22:08:03.077949  193660 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-998398 addons enable metrics-server
	
	I1013 22:08:03.081903  193660 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1013 22:08:03.084802  193660 addons.go:514] duration metric: took 6.121299323s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
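
With storage-provisioner, dashboard and default-storageclass applied, a quick way to confirm what is enabled on this profile, using the same binary as the rest of this report, is:

	out/minikube-linux-arm64 -p no-preload-998398 addons list
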
	I1013 22:08:03.085151  193660 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1013 22:08:03.086169  193660 api_server.go:141] control plane version: v1.34.1
	I1013 22:08:03.086195  193660 api_server.go:131] duration metric: took 11.187329ms to wait for apiserver health ...
	I1013 22:08:03.086204  193660 system_pods.go:43] waiting for kube-system pods to appear ...
	I1013 22:08:03.089924  193660 system_pods.go:59] 8 kube-system pods found
	I1013 22:08:03.089967  193660 system_pods.go:61] "coredns-66bc5c9577-7vlmn" [edd4eb6c-ff17-43de-a57d-d119a7cad435] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 22:08:03.089977  193660 system_pods.go:61] "etcd-no-preload-998398" [b8d4d15c-e804-4230-92e8-8f587ee86dbe] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1013 22:08:03.089983  193660 system_pods.go:61] "kindnet-6nvxb" [5e372ae0-66e2-4ba1-a61a-de71523b139d] Running
	I1013 22:08:03.089990  193660 system_pods.go:61] "kube-apiserver-no-preload-998398" [a614fcdc-6540-451c-93c3-b9ecb1b4e09f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1013 22:08:03.089997  193660 system_pods.go:61] "kube-controller-manager-no-preload-998398" [08d7347e-65c5-4912-982b-1f47cecac69f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1013 22:08:03.090001  193660 system_pods.go:61] "kube-proxy-7zmxr" [e943a88c-1969-4fb7-bbe9-03f3a93e5d6d] Running
	I1013 22:08:03.090008  193660 system_pods.go:61] "kube-scheduler-no-preload-998398" [06161d4d-6e4b-4998-ab23-07b72d2c2d2f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1013 22:08:03.090015  193660 system_pods.go:61] "storage-provisioner" [c073142f-fc41-4606-802d-105fcab5d408] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 22:08:03.090021  193660 system_pods.go:74] duration metric: took 3.81126ms to wait for pod list to return data ...
	I1013 22:08:03.090034  193660 default_sa.go:34] waiting for default service account to be created ...
	I1013 22:08:03.095156  193660 default_sa.go:45] found service account: "default"
	I1013 22:08:03.095185  193660 default_sa.go:55] duration metric: took 5.144987ms for default service account to be created ...
	I1013 22:08:03.095195  193660 system_pods.go:116] waiting for k8s-apps to be running ...
	I1013 22:08:03.100515  193660 system_pods.go:86] 8 kube-system pods found
	I1013 22:08:03.100549  193660 system_pods.go:89] "coredns-66bc5c9577-7vlmn" [edd4eb6c-ff17-43de-a57d-d119a7cad435] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 22:08:03.100559  193660 system_pods.go:89] "etcd-no-preload-998398" [b8d4d15c-e804-4230-92e8-8f587ee86dbe] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1013 22:08:03.100566  193660 system_pods.go:89] "kindnet-6nvxb" [5e372ae0-66e2-4ba1-a61a-de71523b139d] Running
	I1013 22:08:03.100573  193660 system_pods.go:89] "kube-apiserver-no-preload-998398" [a614fcdc-6540-451c-93c3-b9ecb1b4e09f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1013 22:08:03.100579  193660 system_pods.go:89] "kube-controller-manager-no-preload-998398" [08d7347e-65c5-4912-982b-1f47cecac69f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1013 22:08:03.100584  193660 system_pods.go:89] "kube-proxy-7zmxr" [e943a88c-1969-4fb7-bbe9-03f3a93e5d6d] Running
	I1013 22:08:03.100590  193660 system_pods.go:89] "kube-scheduler-no-preload-998398" [06161d4d-6e4b-4998-ab23-07b72d2c2d2f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1013 22:08:03.100595  193660 system_pods.go:89] "storage-provisioner" [c073142f-fc41-4606-802d-105fcab5d408] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 22:08:03.100602  193660 system_pods.go:126] duration metric: took 5.402416ms to wait for k8s-apps to be running ...
	I1013 22:08:03.100612  193660 system_svc.go:44] waiting for kubelet service to be running ....
	I1013 22:08:03.100675  193660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 22:08:03.115620  193660 system_svc.go:56] duration metric: took 14.999565ms WaitForService to wait for kubelet
	I1013 22:08:03.115644  193660 kubeadm.go:586] duration metric: took 6.152498488s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 22:08:03.115662  193660 node_conditions.go:102] verifying NodePressure condition ...
	I1013 22:08:03.118709  193660 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1013 22:08:03.118782  193660 node_conditions.go:123] node cpu capacity is 2
	I1013 22:08:03.118809  193660 node_conditions.go:105] duration metric: took 3.140409ms to run NodePressure ...
	I1013 22:08:03.118834  193660 start.go:241] waiting for startup goroutines ...
	I1013 22:08:03.118876  193660 start.go:246] waiting for cluster config update ...
	I1013 22:08:03.118900  193660 start.go:255] writing updated cluster config ...
	I1013 22:08:03.119222  193660 ssh_runner.go:195] Run: rm -f paused
	I1013 22:08:03.123150  193660 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 22:08:03.127063  193660 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-7vlmn" in "kube-system" namespace to be "Ready" or be gone ...
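
The "extra waiting" phase above polls each core component pod by label until it reports Ready (or disappears), with a 4m ceiling. A rough hand-run equivalent for one of those selectors, not what minikube itself executes, would be:

	kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=240s
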
	W1013 22:08:02.700623  189874 node_ready.go:57] node "embed-certs-251758" has "Ready":"False" status (will retry)
	W1013 22:08:04.700961  189874 node_ready.go:57] node "embed-certs-251758" has "Ready":"False" status (will retry)
	W1013 22:08:05.133531  193660 pod_ready.go:104] pod "coredns-66bc5c9577-7vlmn" is not "Ready", error: <nil>
	W1013 22:08:07.634673  193660 pod_ready.go:104] pod "coredns-66bc5c9577-7vlmn" is not "Ready", error: <nil>
	W1013 22:08:07.200654  189874 node_ready.go:57] node "embed-certs-251758" has "Ready":"False" status (will retry)
	W1013 22:08:09.701230  189874 node_ready.go:57] node "embed-certs-251758" has "Ready":"False" status (will retry)
	W1013 22:08:10.134115  193660 pod_ready.go:104] pod "coredns-66bc5c9577-7vlmn" is not "Ready", error: <nil>
	W1013 22:08:12.634248  193660 pod_ready.go:104] pod "coredns-66bc5c9577-7vlmn" is not "Ready", error: <nil>
	W1013 22:08:11.702884  189874 node_ready.go:57] node "embed-certs-251758" has "Ready":"False" status (will retry)
	W1013 22:08:14.200290  189874 node_ready.go:57] node "embed-certs-251758" has "Ready":"False" status (will retry)
	W1013 22:08:15.132728  193660 pod_ready.go:104] pod "coredns-66bc5c9577-7vlmn" is not "Ready", error: <nil>
	W1013 22:08:17.633128  193660 pod_ready.go:104] pod "coredns-66bc5c9577-7vlmn" is not "Ready", error: <nil>
	W1013 22:08:16.700269  189874 node_ready.go:57] node "embed-certs-251758" has "Ready":"False" status (will retry)
	W1013 22:08:18.700830  189874 node_ready.go:57] node "embed-certs-251758" has "Ready":"False" status (will retry)
	I1013 22:08:20.700097  189874 node_ready.go:49] node "embed-certs-251758" is "Ready"
	I1013 22:08:20.700124  189874 node_ready.go:38] duration metric: took 41.002899656s for node "embed-certs-251758" to be "Ready" ...
	I1013 22:08:20.700145  189874 api_server.go:52] waiting for apiserver process to appear ...
	I1013 22:08:20.700207  189874 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 22:08:20.717582  189874 api_server.go:72] duration metric: took 41.661104634s to wait for apiserver process to appear ...
	I1013 22:08:20.717603  189874 api_server.go:88] waiting for apiserver healthz status ...
	I1013 22:08:20.717621  189874 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1013 22:08:20.725926  189874 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1013 22:08:20.727125  189874 api_server.go:141] control plane version: v1.34.1
	I1013 22:08:20.727148  189874 api_server.go:131] duration metric: took 9.53823ms to wait for apiserver health ...
	I1013 22:08:20.727157  189874 system_pods.go:43] waiting for kube-system pods to appear ...
	I1013 22:08:20.730223  189874 system_pods.go:59] 8 kube-system pods found
	I1013 22:08:20.730256  189874 system_pods.go:61] "coredns-66bc5c9577-gkbv8" [ae7b4689-bcb1-4a31-84a2-726b234eceb7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 22:08:20.730262  189874 system_pods.go:61] "etcd-embed-certs-251758" [a9014fd5-64d9-463c-a4a4-4d640bcdc8ca] Running
	I1013 22:08:20.730268  189874 system_pods.go:61] "kindnet-csh4p" [e79c32bc-0bbe-43e8-bbea-ccd4ff075bb7] Running
	I1013 22:08:20.730273  189874 system_pods.go:61] "kube-apiserver-embed-certs-251758" [93ef7029-7dd4-4212-a4b5-49b211fad012] Running
	I1013 22:08:20.730278  189874 system_pods.go:61] "kube-controller-manager-embed-certs-251758" [056a074f-a27c-4bd8-b72e-c997d1bafdd4] Running
	I1013 22:08:20.730283  189874 system_pods.go:61] "kube-proxy-nmmdh" [7726987e-433d-4e17-9b95-7c1d46d6a2e3] Running
	I1013 22:08:20.730287  189874 system_pods.go:61] "kube-scheduler-embed-certs-251758" [ed4ecb4e-62c7-4b0b-825a-2f0c75fa8337] Running
	I1013 22:08:20.730299  189874 system_pods.go:61] "storage-provisioner" [aadbfae4-4ea2-4d6b-be6d-ac97012be757] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 22:08:20.730308  189874 system_pods.go:74] duration metric: took 3.14484ms to wait for pod list to return data ...
	I1013 22:08:20.730331  189874 default_sa.go:34] waiting for default service account to be created ...
	I1013 22:08:20.732612  189874 default_sa.go:45] found service account: "default"
	I1013 22:08:20.732636  189874 default_sa.go:55] duration metric: took 2.298051ms for default service account to be created ...
	I1013 22:08:20.732645  189874 system_pods.go:116] waiting for k8s-apps to be running ...
	I1013 22:08:20.735530  189874 system_pods.go:86] 8 kube-system pods found
	I1013 22:08:20.735561  189874 system_pods.go:89] "coredns-66bc5c9577-gkbv8" [ae7b4689-bcb1-4a31-84a2-726b234eceb7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 22:08:20.735568  189874 system_pods.go:89] "etcd-embed-certs-251758" [a9014fd5-64d9-463c-a4a4-4d640bcdc8ca] Running
	I1013 22:08:20.735574  189874 system_pods.go:89] "kindnet-csh4p" [e79c32bc-0bbe-43e8-bbea-ccd4ff075bb7] Running
	I1013 22:08:20.735578  189874 system_pods.go:89] "kube-apiserver-embed-certs-251758" [93ef7029-7dd4-4212-a4b5-49b211fad012] Running
	I1013 22:08:20.735583  189874 system_pods.go:89] "kube-controller-manager-embed-certs-251758" [056a074f-a27c-4bd8-b72e-c997d1bafdd4] Running
	I1013 22:08:20.735588  189874 system_pods.go:89] "kube-proxy-nmmdh" [7726987e-433d-4e17-9b95-7c1d46d6a2e3] Running
	I1013 22:08:20.735592  189874 system_pods.go:89] "kube-scheduler-embed-certs-251758" [ed4ecb4e-62c7-4b0b-825a-2f0c75fa8337] Running
	I1013 22:08:20.735600  189874 system_pods.go:89] "storage-provisioner" [aadbfae4-4ea2-4d6b-be6d-ac97012be757] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 22:08:20.735625  189874 retry.go:31] will retry after 240.755142ms: missing components: kube-dns
	I1013 22:08:20.981105  189874 system_pods.go:86] 8 kube-system pods found
	I1013 22:08:20.981136  189874 system_pods.go:89] "coredns-66bc5c9577-gkbv8" [ae7b4689-bcb1-4a31-84a2-726b234eceb7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 22:08:20.981143  189874 system_pods.go:89] "etcd-embed-certs-251758" [a9014fd5-64d9-463c-a4a4-4d640bcdc8ca] Running
	I1013 22:08:20.981149  189874 system_pods.go:89] "kindnet-csh4p" [e79c32bc-0bbe-43e8-bbea-ccd4ff075bb7] Running
	I1013 22:08:20.981153  189874 system_pods.go:89] "kube-apiserver-embed-certs-251758" [93ef7029-7dd4-4212-a4b5-49b211fad012] Running
	I1013 22:08:20.981158  189874 system_pods.go:89] "kube-controller-manager-embed-certs-251758" [056a074f-a27c-4bd8-b72e-c997d1bafdd4] Running
	I1013 22:08:20.981161  189874 system_pods.go:89] "kube-proxy-nmmdh" [7726987e-433d-4e17-9b95-7c1d46d6a2e3] Running
	I1013 22:08:20.981165  189874 system_pods.go:89] "kube-scheduler-embed-certs-251758" [ed4ecb4e-62c7-4b0b-825a-2f0c75fa8337] Running
	I1013 22:08:20.981171  189874 system_pods.go:89] "storage-provisioner" [aadbfae4-4ea2-4d6b-be6d-ac97012be757] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 22:08:20.981186  189874 retry.go:31] will retry after 338.78473ms: missing components: kube-dns
	I1013 22:08:21.323930  189874 system_pods.go:86] 8 kube-system pods found
	I1013 22:08:21.324028  189874 system_pods.go:89] "coredns-66bc5c9577-gkbv8" [ae7b4689-bcb1-4a31-84a2-726b234eceb7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 22:08:21.324040  189874 system_pods.go:89] "etcd-embed-certs-251758" [a9014fd5-64d9-463c-a4a4-4d640bcdc8ca] Running
	I1013 22:08:21.324048  189874 system_pods.go:89] "kindnet-csh4p" [e79c32bc-0bbe-43e8-bbea-ccd4ff075bb7] Running
	I1013 22:08:21.324053  189874 system_pods.go:89] "kube-apiserver-embed-certs-251758" [93ef7029-7dd4-4212-a4b5-49b211fad012] Running
	I1013 22:08:21.324058  189874 system_pods.go:89] "kube-controller-manager-embed-certs-251758" [056a074f-a27c-4bd8-b72e-c997d1bafdd4] Running
	I1013 22:08:21.324062  189874 system_pods.go:89] "kube-proxy-nmmdh" [7726987e-433d-4e17-9b95-7c1d46d6a2e3] Running
	I1013 22:08:21.324066  189874 system_pods.go:89] "kube-scheduler-embed-certs-251758" [ed4ecb4e-62c7-4b0b-825a-2f0c75fa8337] Running
	I1013 22:08:21.324071  189874 system_pods.go:89] "storage-provisioner" [aadbfae4-4ea2-4d6b-be6d-ac97012be757] Running
	I1013 22:08:21.324100  189874 retry.go:31] will retry after 447.15634ms: missing components: kube-dns
	I1013 22:08:21.774705  189874 system_pods.go:86] 8 kube-system pods found
	I1013 22:08:21.774739  189874 system_pods.go:89] "coredns-66bc5c9577-gkbv8" [ae7b4689-bcb1-4a31-84a2-726b234eceb7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 22:08:21.774747  189874 system_pods.go:89] "etcd-embed-certs-251758" [a9014fd5-64d9-463c-a4a4-4d640bcdc8ca] Running
	I1013 22:08:21.774754  189874 system_pods.go:89] "kindnet-csh4p" [e79c32bc-0bbe-43e8-bbea-ccd4ff075bb7] Running
	I1013 22:08:21.774758  189874 system_pods.go:89] "kube-apiserver-embed-certs-251758" [93ef7029-7dd4-4212-a4b5-49b211fad012] Running
	I1013 22:08:21.774763  189874 system_pods.go:89] "kube-controller-manager-embed-certs-251758" [056a074f-a27c-4bd8-b72e-c997d1bafdd4] Running
	I1013 22:08:21.774768  189874 system_pods.go:89] "kube-proxy-nmmdh" [7726987e-433d-4e17-9b95-7c1d46d6a2e3] Running
	I1013 22:08:21.774772  189874 system_pods.go:89] "kube-scheduler-embed-certs-251758" [ed4ecb4e-62c7-4b0b-825a-2f0c75fa8337] Running
	I1013 22:08:21.774776  189874 system_pods.go:89] "storage-provisioner" [aadbfae4-4ea2-4d6b-be6d-ac97012be757] Running
	I1013 22:08:21.774790  189874 retry.go:31] will retry after 371.448243ms: missing components: kube-dns
	I1013 22:08:22.157082  189874 system_pods.go:86] 8 kube-system pods found
	I1013 22:08:22.157116  189874 system_pods.go:89] "coredns-66bc5c9577-gkbv8" [ae7b4689-bcb1-4a31-84a2-726b234eceb7] Running
	I1013 22:08:22.157124  189874 system_pods.go:89] "etcd-embed-certs-251758" [a9014fd5-64d9-463c-a4a4-4d640bcdc8ca] Running
	I1013 22:08:22.157129  189874 system_pods.go:89] "kindnet-csh4p" [e79c32bc-0bbe-43e8-bbea-ccd4ff075bb7] Running
	I1013 22:08:22.157133  189874 system_pods.go:89] "kube-apiserver-embed-certs-251758" [93ef7029-7dd4-4212-a4b5-49b211fad012] Running
	I1013 22:08:22.157137  189874 system_pods.go:89] "kube-controller-manager-embed-certs-251758" [056a074f-a27c-4bd8-b72e-c997d1bafdd4] Running
	I1013 22:08:22.157142  189874 system_pods.go:89] "kube-proxy-nmmdh" [7726987e-433d-4e17-9b95-7c1d46d6a2e3] Running
	I1013 22:08:22.157147  189874 system_pods.go:89] "kube-scheduler-embed-certs-251758" [ed4ecb4e-62c7-4b0b-825a-2f0c75fa8337] Running
	I1013 22:08:22.157151  189874 system_pods.go:89] "storage-provisioner" [aadbfae4-4ea2-4d6b-be6d-ac97012be757] Running
	I1013 22:08:22.157158  189874 system_pods.go:126] duration metric: took 1.424508286s to wait for k8s-apps to be running ...
	I1013 22:08:22.157169  189874 system_svc.go:44] waiting for kubelet service to be running ....
	I1013 22:08:22.157226  189874 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 22:08:22.175287  189874 system_svc.go:56] duration metric: took 18.109468ms WaitForService to wait for kubelet
	I1013 22:08:22.175318  189874 kubeadm.go:586] duration metric: took 43.118844255s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 22:08:22.175336  189874 node_conditions.go:102] verifying NodePressure condition ...
	I1013 22:08:22.178163  189874 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1013 22:08:22.178199  189874 node_conditions.go:123] node cpu capacity is 2
	I1013 22:08:22.178212  189874 node_conditions.go:105] duration metric: took 2.871132ms to run NodePressure ...
	I1013 22:08:22.178227  189874 start.go:241] waiting for startup goroutines ...
	I1013 22:08:22.178234  189874 start.go:246] waiting for cluster config update ...
	I1013 22:08:22.178245  189874 start.go:255] writing updated cluster config ...
	I1013 22:08:22.178535  189874 ssh_runner.go:195] Run: rm -f paused
	I1013 22:08:22.182023  189874 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 22:08:22.185777  189874 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-gkbv8" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:08:22.190147  189874 pod_ready.go:94] pod "coredns-66bc5c9577-gkbv8" is "Ready"
	I1013 22:08:22.190172  189874 pod_ready.go:86] duration metric: took 4.373649ms for pod "coredns-66bc5c9577-gkbv8" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:08:22.192495  189874 pod_ready.go:83] waiting for pod "etcd-embed-certs-251758" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:08:22.196491  189874 pod_ready.go:94] pod "etcd-embed-certs-251758" is "Ready"
	I1013 22:08:22.196517  189874 pod_ready.go:86] duration metric: took 3.994353ms for pod "etcd-embed-certs-251758" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:08:22.198773  189874 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-251758" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:08:22.203081  189874 pod_ready.go:94] pod "kube-apiserver-embed-certs-251758" is "Ready"
	I1013 22:08:22.203115  189874 pod_ready.go:86] duration metric: took 4.315895ms for pod "kube-apiserver-embed-certs-251758" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:08:22.205527  189874 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-251758" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:08:22.587372  189874 pod_ready.go:94] pod "kube-controller-manager-embed-certs-251758" is "Ready"
	I1013 22:08:22.587405  189874 pod_ready.go:86] duration metric: took 381.854703ms for pod "kube-controller-manager-embed-certs-251758" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:08:22.787170  189874 pod_ready.go:83] waiting for pod "kube-proxy-nmmdh" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:08:23.186662  189874 pod_ready.go:94] pod "kube-proxy-nmmdh" is "Ready"
	I1013 22:08:23.186693  189874 pod_ready.go:86] duration metric: took 399.496539ms for pod "kube-proxy-nmmdh" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:08:23.387441  189874 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-251758" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:08:23.786224  189874 pod_ready.go:94] pod "kube-scheduler-embed-certs-251758" is "Ready"
	I1013 22:08:23.786253  189874 pod_ready.go:86] duration metric: took 398.784065ms for pod "kube-scheduler-embed-certs-251758" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:08:23.786265  189874 pod_ready.go:40] duration metric: took 1.604213374s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 22:08:23.846344  189874 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1013 22:08:23.851387  189874 out.go:179] * Done! kubectl is now configured to use "embed-certs-251758" cluster and "default" namespace by default
	W1013 22:08:20.132389  193660 pod_ready.go:104] pod "coredns-66bc5c9577-7vlmn" is not "Ready", error: <nil>
	W1013 22:08:22.141318  193660 pod_ready.go:104] pod "coredns-66bc5c9577-7vlmn" is not "Ready", error: <nil>
	W1013 22:08:24.633150  193660 pod_ready.go:104] pod "coredns-66bc5c9577-7vlmn" is not "Ready", error: <nil>
	W1013 22:08:27.132986  193660 pod_ready.go:104] pod "coredns-66bc5c9577-7vlmn" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Oct 13 22:08:21 embed-certs-251758 crio[839]: time="2025-10-13T22:08:21.080170887Z" level=info msg="Created container d1a8f1ee4278e883e7574bf5cc43b9d8449f8d1e7ff9587c24f8321cb8fa2b07: kube-system/coredns-66bc5c9577-gkbv8/coredns" id=dc541ec1-396d-4a35-879e-9c361eae1997 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:08:21 embed-certs-251758 crio[839]: time="2025-10-13T22:08:21.080689159Z" level=info msg="Starting container: d1a8f1ee4278e883e7574bf5cc43b9d8449f8d1e7ff9587c24f8321cb8fa2b07" id=e9715892-27d7-4f19-b9f0-6c792f71ae64 name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 22:08:21 embed-certs-251758 crio[839]: time="2025-10-13T22:08:21.083731347Z" level=info msg="Started container" PID=1743 containerID=d1a8f1ee4278e883e7574bf5cc43b9d8449f8d1e7ff9587c24f8321cb8fa2b07 description=kube-system/coredns-66bc5c9577-gkbv8/coredns id=e9715892-27d7-4f19-b9f0-6c792f71ae64 name=/runtime.v1.RuntimeService/StartContainer sandboxID=cbeb97937e9aed1e135de1536bc470e68345d3c2c415353f835760a5e8d312aa
	Oct 13 22:08:24 embed-certs-251758 crio[839]: time="2025-10-13T22:08:24.368909925Z" level=info msg="Running pod sandbox: default/busybox/POD" id=b582ba85-a744-489a-b594-026e31aa4010 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 13 22:08:24 embed-certs-251758 crio[839]: time="2025-10-13T22:08:24.36898121Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:08:24 embed-certs-251758 crio[839]: time="2025-10-13T22:08:24.37839575Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:5a194b72c64785482acb23d7ffc9deaa71501e86b9e40738c70f1fcaca94eb86 UID:e59e21ac-ac32-43ef-aebf-149407845f99 NetNS:/var/run/netns/23795a70-3be5-4db4-aeae-06ee863092dc Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40000791a0}] Aliases:map[]}"
	Oct 13 22:08:24 embed-certs-251758 crio[839]: time="2025-10-13T22:08:24.378445258Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 13 22:08:24 embed-certs-251758 crio[839]: time="2025-10-13T22:08:24.388099456Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:5a194b72c64785482acb23d7ffc9deaa71501e86b9e40738c70f1fcaca94eb86 UID:e59e21ac-ac32-43ef-aebf-149407845f99 NetNS:/var/run/netns/23795a70-3be5-4db4-aeae-06ee863092dc Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40000791a0}] Aliases:map[]}"
	Oct 13 22:08:24 embed-certs-251758 crio[839]: time="2025-10-13T22:08:24.388402382Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 13 22:08:24 embed-certs-251758 crio[839]: time="2025-10-13T22:08:24.391404489Z" level=info msg="Ran pod sandbox 5a194b72c64785482acb23d7ffc9deaa71501e86b9e40738c70f1fcaca94eb86 with infra container: default/busybox/POD" id=b582ba85-a744-489a-b594-026e31aa4010 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 13 22:08:24 embed-certs-251758 crio[839]: time="2025-10-13T22:08:24.393544127Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=48ddb55c-0827-440b-b758-b2e14296e47c name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:08:24 embed-certs-251758 crio[839]: time="2025-10-13T22:08:24.393789955Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=48ddb55c-0827-440b-b758-b2e14296e47c name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:08:24 embed-certs-251758 crio[839]: time="2025-10-13T22:08:24.393909533Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=48ddb55c-0827-440b-b758-b2e14296e47c name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:08:24 embed-certs-251758 crio[839]: time="2025-10-13T22:08:24.396416826Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a32ca719-4e87-415c-bcc0-7b04ce84913e name=/runtime.v1.ImageService/PullImage
	Oct 13 22:08:24 embed-certs-251758 crio[839]: time="2025-10-13T22:08:24.398014792Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 13 22:08:26 embed-certs-251758 crio[839]: time="2025-10-13T22:08:26.340131436Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=a32ca719-4e87-415c-bcc0-7b04ce84913e name=/runtime.v1.ImageService/PullImage
	Oct 13 22:08:26 embed-certs-251758 crio[839]: time="2025-10-13T22:08:26.340933385Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=1d608d00-b3bb-41e8-a69d-32f05b95d2a0 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:08:26 embed-certs-251758 crio[839]: time="2025-10-13T22:08:26.34270445Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=688477a6-c151-4beb-a9e0-462016b33cbc name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:08:26 embed-certs-251758 crio[839]: time="2025-10-13T22:08:26.348561139Z" level=info msg="Creating container: default/busybox/busybox" id=4fdcf87e-10b7-4ded-814d-e629ab3d74c8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:08:26 embed-certs-251758 crio[839]: time="2025-10-13T22:08:26.349351273Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:08:26 embed-certs-251758 crio[839]: time="2025-10-13T22:08:26.354237509Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:08:26 embed-certs-251758 crio[839]: time="2025-10-13T22:08:26.354683323Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:08:26 embed-certs-251758 crio[839]: time="2025-10-13T22:08:26.36983234Z" level=info msg="Created container 68c6309ed7c5cc65332a3c1c704eb7f4c4c1937429c8e767e2c798ab3efc9ef9: default/busybox/busybox" id=4fdcf87e-10b7-4ded-814d-e629ab3d74c8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:08:26 embed-certs-251758 crio[839]: time="2025-10-13T22:08:26.370832463Z" level=info msg="Starting container: 68c6309ed7c5cc65332a3c1c704eb7f4c4c1937429c8e767e2c798ab3efc9ef9" id=512e41a6-659e-400d-a663-0e44eac5d511 name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 22:08:26 embed-certs-251758 crio[839]: time="2025-10-13T22:08:26.372543697Z" level=info msg="Started container" PID=1797 containerID=68c6309ed7c5cc65332a3c1c704eb7f4c4c1937429c8e767e2c798ab3efc9ef9 description=default/busybox/busybox id=512e41a6-659e-400d-a663-0e44eac5d511 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5a194b72c64785482acb23d7ffc9deaa71501e86b9e40738c70f1fcaca94eb86
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	68c6309ed7c5c       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago        Running             busybox                   0                   5a194b72c6478       busybox                                      default
	d1a8f1ee4278e       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      13 seconds ago       Running             coredns                   0                   cbeb97937e9ae       coredns-66bc5c9577-gkbv8                     kube-system
	5ca1e52751c49       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      13 seconds ago       Running             storage-provisioner       0                   72b8ee014a799       storage-provisioner                          kube-system
	87e82c05d95fe       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      54 seconds ago       Running             kube-proxy                0                   41ba0e578a830       kube-proxy-nmmdh                             kube-system
	bfac8fac6264b       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      54 seconds ago       Running             kindnet-cni               0                   2f5f020b96864       kindnet-csh4p                                kube-system
	2615a164f9b4d       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   be9f8b98b15f1       kube-scheduler-embed-certs-251758            kube-system
	be2a8446ed111       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   3dc571e77d524       kube-controller-manager-embed-certs-251758   kube-system
	203a411ba45b8       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   63e08687280ae       kube-apiserver-embed-certs-251758            kube-system
	7685ec85807ab       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   7893df1a2f4e5       etcd-embed-certs-251758                      kube-system
	
	
	==> coredns [d1a8f1ee4278e883e7574bf5cc43b9d8449f8d1e7ff9587c24f8321cb8fa2b07] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59210 - 60812 "HINFO IN 1967839905496477120.92901575877797602. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.013802827s
	
	
	==> describe nodes <==
	Name:               embed-certs-251758
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-251758
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6d66ff63385795e7745a92b3d96cb54f5b977801
	                    minikube.k8s.io/name=embed-certs-251758
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_13T22_07_35_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 Oct 2025 22:07:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-251758
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 Oct 2025 22:08:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 Oct 2025 22:08:25 +0000   Mon, 13 Oct 2025 22:07:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 Oct 2025 22:08:25 +0000   Mon, 13 Oct 2025 22:07:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 Oct 2025 22:08:25 +0000   Mon, 13 Oct 2025 22:07:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 13 Oct 2025 22:08:25 +0000   Mon, 13 Oct 2025 22:08:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-251758
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 8b08b0a4a9b34a2c883f8176a95bf4f0
	  System UUID:                f24253cd-26e9-4717-a721-e240cb5f208d
	  Boot ID:                    d306204e-e74c-4697-9887-c9f3c96cc083
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-gkbv8                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     55s
	  kube-system                 etcd-embed-certs-251758                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         60s
	  kube-system                 kindnet-csh4p                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      56s
	  kube-system                 kube-apiserver-embed-certs-251758             250m (12%)    0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-controller-manager-embed-certs-251758    200m (10%)    0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-proxy-nmmdh                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 kube-scheduler-embed-certs-251758             100m (5%)     0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 54s                kube-proxy       
	  Normal   Starting                 68s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 68s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  68s (x8 over 68s)  kubelet          Node embed-certs-251758 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    68s (x8 over 68s)  kubelet          Node embed-certs-251758 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     68s (x7 over 68s)  kubelet          Node embed-certs-251758 status is now: NodeHasSufficientPID
	  Normal   Starting                 61s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 61s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  60s                kubelet          Node embed-certs-251758 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    60s                kubelet          Node embed-certs-251758 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     60s                kubelet          Node embed-certs-251758 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           57s                node-controller  Node embed-certs-251758 event: Registered Node embed-certs-251758 in Controller
	  Normal   NodeReady                14s                kubelet          Node embed-certs-251758 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct13 21:38] overlayfs: idmapped layers are currently not supported
	[Oct13 21:39] overlayfs: idmapped layers are currently not supported
	[Oct13 21:40] overlayfs: idmapped layers are currently not supported
	[Oct13 21:41] overlayfs: idmapped layers are currently not supported
	[Oct13 21:42] overlayfs: idmapped layers are currently not supported
	[  +7.684868] overlayfs: idmapped layers are currently not supported
	[Oct13 21:43] overlayfs: idmapped layers are currently not supported
	[ +17.500139] overlayfs: idmapped layers are currently not supported
	[Oct13 21:44] overlayfs: idmapped layers are currently not supported
	[ +25.978359] overlayfs: idmapped layers are currently not supported
	[Oct13 21:46] overlayfs: idmapped layers are currently not supported
	[Oct13 21:47] overlayfs: idmapped layers are currently not supported
	[Oct13 21:49] overlayfs: idmapped layers are currently not supported
	[Oct13 21:50] overlayfs: idmapped layers are currently not supported
	[Oct13 21:51] overlayfs: idmapped layers are currently not supported
	[Oct13 21:53] overlayfs: idmapped layers are currently not supported
	[Oct13 21:54] overlayfs: idmapped layers are currently not supported
	[Oct13 21:55] overlayfs: idmapped layers are currently not supported
	[Oct13 22:02] overlayfs: idmapped layers are currently not supported
	[Oct13 22:04] overlayfs: idmapped layers are currently not supported
	[ +37.438407] overlayfs: idmapped layers are currently not supported
	[Oct13 22:05] overlayfs: idmapped layers are currently not supported
	[Oct13 22:06] overlayfs: idmapped layers are currently not supported
	[Oct13 22:07] overlayfs: idmapped layers are currently not supported
	[ +29.672836] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [7685ec85807ab31eeef5b49710f02838cd27acb9e5f2ab440f323f7ebd8a5677] <==
	{"level":"warn","ts":"2025-10-13T22:07:29.457186Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:07:29.471313Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:07:29.490892Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:07:29.514031Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:07:29.531924Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:07:29.572045Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:07:29.580587Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:07:29.607330Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:07:29.617947Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:07:29.635998Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:07:29.651200Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:07:29.675502Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:07:29.684746Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:07:29.701513Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:07:29.718771Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:07:29.735381Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:07:29.752680Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:07:29.770594Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:07:29.787346Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:07:29.803967Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:07:29.833668Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:07:29.855477Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:07:29.881609Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:07:29.898279Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:07:30.037536Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45766","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 22:08:35 up  1:50,  0 user,  load average: 2.86, 2.67, 2.17
	Linux embed-certs-251758 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [bfac8fac6264be067dc744168a269a53a070065a1951bf6b6318f846154745bf] <==
	I1013 22:07:40.021693       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1013 22:07:40.021956       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1013 22:07:40.022084       1 main.go:148] setting mtu 1500 for CNI 
	I1013 22:07:40.022097       1 main.go:178] kindnetd IP family: "ipv4"
	I1013 22:07:40.022109       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-13T22:07:40Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1013 22:07:40.305012       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1013 22:07:40.305051       1 controller.go:381] "Waiting for informer caches to sync"
	I1013 22:07:40.305062       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1013 22:07:40.305164       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1013 22:08:10.304998       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1013 22:08:10.305115       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1013 22:08:10.305251       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1013 22:08:10.305350       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1013 22:08:11.807623       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1013 22:08:11.807657       1 metrics.go:72] Registering metrics
	I1013 22:08:11.807708       1 controller.go:711] "Syncing nftables rules"
	I1013 22:08:20.307370       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1013 22:08:20.307408       1 main.go:301] handling current node
	I1013 22:08:30.306874       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1013 22:08:30.306909       1 main.go:301] handling current node
	
	
	==> kube-apiserver [203a411ba45b8af5fcddb4b7840e12368dc0d7eab08df04993597286c262afbb] <==
	I1013 22:07:30.996624       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1013 22:07:31.000596       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1013 22:07:31.002107       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1013 22:07:31.002809       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1013 22:07:31.015015       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1013 22:07:31.002835       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1013 22:07:31.049676       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1013 22:07:31.751026       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1013 22:07:31.758769       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1013 22:07:31.758917       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1013 22:07:32.461477       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1013 22:07:32.528941       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1013 22:07:32.662312       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1013 22:07:32.669289       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1013 22:07:32.670381       1 controller.go:667] quota admission added evaluator for: endpoints
	I1013 22:07:32.675643       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1013 22:07:32.828072       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1013 22:07:33.898419       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1013 22:07:33.925155       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1013 22:07:33.938080       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1013 22:07:38.335317       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1013 22:07:38.341562       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1013 22:07:38.550382       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1013 22:07:38.882864       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1013 22:08:33.219928       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:60248: use of closed network connection
	
	
	==> kube-controller-manager [be2a8446ed111f57dd0414f4983fd5cf6a7c76e3447cdee9cb2c0fbe198a2aa3] <==
	I1013 22:07:37.832820       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1013 22:07:37.833893       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1013 22:07:37.833950       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1013 22:07:37.835033       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1013 22:07:37.835071       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1013 22:07:37.837016       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1013 22:07:37.840261       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1013 22:07:37.841478       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1013 22:07:37.841490       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1013 22:07:37.848719       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1013 22:07:37.849775       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1013 22:07:37.850972       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1013 22:07:37.875008       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1013 22:07:37.877252       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1013 22:07:37.877322       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1013 22:07:37.877670       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1013 22:07:37.877815       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1013 22:07:37.877848       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1013 22:07:37.878851       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1013 22:07:37.880726       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1013 22:07:37.898079       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 22:07:37.906261       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 22:07:37.906341       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1013 22:07:37.906358       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1013 22:08:22.835212       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [87e82c05d95fea1ace955c030bd57e4f68a7cdba371c2cdc2e89eefc7b7ee70c] <==
	I1013 22:07:40.504404       1 server_linux.go:53] "Using iptables proxy"
	I1013 22:07:40.597209       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 22:07:40.698136       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 22:07:40.698175       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1013 22:07:40.698268       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 22:07:40.717195       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1013 22:07:40.717267       1 server_linux.go:132] "Using iptables Proxier"
	I1013 22:07:40.721171       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 22:07:40.721494       1 server.go:527] "Version info" version="v1.34.1"
	I1013 22:07:40.721517       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 22:07:40.724618       1 config.go:106] "Starting endpoint slice config controller"
	I1013 22:07:40.724686       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 22:07:40.726328       1 config.go:200] "Starting service config controller"
	I1013 22:07:40.726384       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 22:07:40.726720       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 22:07:40.726765       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 22:07:40.726736       1 config.go:309] "Starting node config controller"
	I1013 22:07:40.729207       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 22:07:40.729238       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 22:07:40.825091       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1013 22:07:40.826742       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1013 22:07:40.826826       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [2615a164f9b4d15af3a43f64ae2e0834f81f20f7a3719a3d547fb2fd6fc7e1fb] <==
	I1013 22:07:31.616955       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 22:07:31.618960       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 22:07:31.618999       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 22:07:31.619427       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1013 22:07:31.619535       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1013 22:07:31.620462       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1013 22:07:31.629396       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1013 22:07:31.629514       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1013 22:07:31.629562       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1013 22:07:31.629609       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1013 22:07:31.629667       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1013 22:07:31.629720       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1013 22:07:31.629782       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1013 22:07:31.629830       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1013 22:07:31.629942       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1013 22:07:31.629993       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1013 22:07:31.630028       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1013 22:07:31.630094       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1013 22:07:31.630132       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1013 22:07:31.630430       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1013 22:07:31.630574       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1013 22:07:31.630662       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1013 22:07:31.630692       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1013 22:07:31.630724       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	I1013 22:07:33.119419       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 13 22:07:38 embed-certs-251758 kubelet[1319]: E1013 22:07:38.622603    1319 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:embed-certs-251758\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-251758' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Oct 13 22:07:38 embed-certs-251758 kubelet[1319]: I1013 22:07:38.709835    1319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7726987e-433d-4e17-9b95-7c1d46d6a2e3-kube-proxy\") pod \"kube-proxy-nmmdh\" (UID: \"7726987e-433d-4e17-9b95-7c1d46d6a2e3\") " pod="kube-system/kube-proxy-nmmdh"
	Oct 13 22:07:38 embed-certs-251758 kubelet[1319]: I1013 22:07:38.709880    1319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7726987e-433d-4e17-9b95-7c1d46d6a2e3-lib-modules\") pod \"kube-proxy-nmmdh\" (UID: \"7726987e-433d-4e17-9b95-7c1d46d6a2e3\") " pod="kube-system/kube-proxy-nmmdh"
	Oct 13 22:07:38 embed-certs-251758 kubelet[1319]: I1013 22:07:38.709900    1319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/e79c32bc-0bbe-43e8-bbea-ccd4ff075bb7-cni-cfg\") pod \"kindnet-csh4p\" (UID: \"e79c32bc-0bbe-43e8-bbea-ccd4ff075bb7\") " pod="kube-system/kindnet-csh4p"
	Oct 13 22:07:38 embed-certs-251758 kubelet[1319]: I1013 22:07:38.709916    1319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e79c32bc-0bbe-43e8-bbea-ccd4ff075bb7-xtables-lock\") pod \"kindnet-csh4p\" (UID: \"e79c32bc-0bbe-43e8-bbea-ccd4ff075bb7\") " pod="kube-system/kindnet-csh4p"
	Oct 13 22:07:38 embed-certs-251758 kubelet[1319]: I1013 22:07:38.709938    1319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgnv5\" (UniqueName: \"kubernetes.io/projected/7726987e-433d-4e17-9b95-7c1d46d6a2e3-kube-api-access-lgnv5\") pod \"kube-proxy-nmmdh\" (UID: \"7726987e-433d-4e17-9b95-7c1d46d6a2e3\") " pod="kube-system/kube-proxy-nmmdh"
	Oct 13 22:07:38 embed-certs-251758 kubelet[1319]: I1013 22:07:38.709955    1319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e79c32bc-0bbe-43e8-bbea-ccd4ff075bb7-lib-modules\") pod \"kindnet-csh4p\" (UID: \"e79c32bc-0bbe-43e8-bbea-ccd4ff075bb7\") " pod="kube-system/kindnet-csh4p"
	Oct 13 22:07:38 embed-certs-251758 kubelet[1319]: I1013 22:07:38.709973    1319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7726987e-433d-4e17-9b95-7c1d46d6a2e3-xtables-lock\") pod \"kube-proxy-nmmdh\" (UID: \"7726987e-433d-4e17-9b95-7c1d46d6a2e3\") " pod="kube-system/kube-proxy-nmmdh"
	Oct 13 22:07:38 embed-certs-251758 kubelet[1319]: I1013 22:07:38.709990    1319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2f7r\" (UniqueName: \"kubernetes.io/projected/e79c32bc-0bbe-43e8-bbea-ccd4ff075bb7-kube-api-access-k2f7r\") pod \"kindnet-csh4p\" (UID: \"e79c32bc-0bbe-43e8-bbea-ccd4ff075bb7\") " pod="kube-system/kindnet-csh4p"
	Oct 13 22:07:39 embed-certs-251758 kubelet[1319]: I1013 22:07:39.751541    1319 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 13 22:07:39 embed-certs-251758 kubelet[1319]: E1013 22:07:39.812074    1319 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
	Oct 13 22:07:39 embed-certs-251758 kubelet[1319]: E1013 22:07:39.812356    1319 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7726987e-433d-4e17-9b95-7c1d46d6a2e3-kube-proxy podName:7726987e-433d-4e17-9b95-7c1d46d6a2e3 nodeName:}" failed. No retries permitted until 2025-10-13 22:07:40.3123291 +0000 UTC m=+6.570209589 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/7726987e-433d-4e17-9b95-7c1d46d6a2e3-kube-proxy") pod "kube-proxy-nmmdh" (UID: "7726987e-433d-4e17-9b95-7c1d46d6a2e3") : failed to sync configmap cache: timed out waiting for the condition
	Oct 13 22:07:39 embed-certs-251758 kubelet[1319]: W1013 22:07:39.868095    1319 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/bce2b62de8b18e7b3cd1dad60cf5b9b624701b45de81bf20dc286e3c67300396/crio-2f5f020b968642c7ed6bf6ad767562817c4091ccd01f1b428b67eab7732c3e24 WatchSource:0}: Error finding container 2f5f020b968642c7ed6bf6ad767562817c4091ccd01f1b428b67eab7732c3e24: Status 404 returned error can't find the container with id 2f5f020b968642c7ed6bf6ad767562817c4091ccd01f1b428b67eab7732c3e24
	Oct 13 22:07:40 embed-certs-251758 kubelet[1319]: W1013 22:07:40.427938    1319 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/bce2b62de8b18e7b3cd1dad60cf5b9b624701b45de81bf20dc286e3c67300396/crio-41ba0e578a830d601755083e42bc7725e958bbb65a0205aca4082d6349bf4598 WatchSource:0}: Error finding container 41ba0e578a830d601755083e42bc7725e958bbb65a0205aca4082d6349bf4598: Status 404 returned error can't find the container with id 41ba0e578a830d601755083e42bc7725e958bbb65a0205aca4082d6349bf4598
	Oct 13 22:07:40 embed-certs-251758 kubelet[1319]: I1013 22:07:40.990236    1319 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-csh4p" podStartSLOduration=2.990217173 podStartE2EDuration="2.990217173s" podCreationTimestamp="2025-10-13 22:07:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 22:07:39.987595547 +0000 UTC m=+6.245476044" watchObservedRunningTime="2025-10-13 22:07:40.990217173 +0000 UTC m=+7.248097662"
	Oct 13 22:07:42 embed-certs-251758 kubelet[1319]: I1013 22:07:42.569038    1319 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-nmmdh" podStartSLOduration=4.569019433 podStartE2EDuration="4.569019433s" podCreationTimestamp="2025-10-13 22:07:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 22:07:40.991097127 +0000 UTC m=+7.248977615" watchObservedRunningTime="2025-10-13 22:07:42.569019433 +0000 UTC m=+8.826899921"
	Oct 13 22:08:20 embed-certs-251758 kubelet[1319]: I1013 22:08:20.629271    1319 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 13 22:08:20 embed-certs-251758 kubelet[1319]: I1013 22:08:20.726029    1319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/aadbfae4-4ea2-4d6b-be6d-ac97012be757-tmp\") pod \"storage-provisioner\" (UID: \"aadbfae4-4ea2-4d6b-be6d-ac97012be757\") " pod="kube-system/storage-provisioner"
	Oct 13 22:08:20 embed-certs-251758 kubelet[1319]: I1013 22:08:20.726104    1319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k48g8\" (UniqueName: \"kubernetes.io/projected/aadbfae4-4ea2-4d6b-be6d-ac97012be757-kube-api-access-k48g8\") pod \"storage-provisioner\" (UID: \"aadbfae4-4ea2-4d6b-be6d-ac97012be757\") " pod="kube-system/storage-provisioner"
	Oct 13 22:08:20 embed-certs-251758 kubelet[1319]: I1013 22:08:20.726147    1319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae7b4689-bcb1-4a31-84a2-726b234eceb7-config-volume\") pod \"coredns-66bc5c9577-gkbv8\" (UID: \"ae7b4689-bcb1-4a31-84a2-726b234eceb7\") " pod="kube-system/coredns-66bc5c9577-gkbv8"
	Oct 13 22:08:20 embed-certs-251758 kubelet[1319]: I1013 22:08:20.726169    1319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pv9w7\" (UniqueName: \"kubernetes.io/projected/ae7b4689-bcb1-4a31-84a2-726b234eceb7-kube-api-access-pv9w7\") pod \"coredns-66bc5c9577-gkbv8\" (UID: \"ae7b4689-bcb1-4a31-84a2-726b234eceb7\") " pod="kube-system/coredns-66bc5c9577-gkbv8"
	Oct 13 22:08:21 embed-certs-251758 kubelet[1319]: W1013 22:08:21.031264    1319 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/bce2b62de8b18e7b3cd1dad60cf5b9b624701b45de81bf20dc286e3c67300396/crio-cbeb97937e9aed1e135de1536bc470e68345d3c2c415353f835760a5e8d312aa WatchSource:0}: Error finding container cbeb97937e9aed1e135de1536bc470e68345d3c2c415353f835760a5e8d312aa: Status 404 returned error can't find the container with id cbeb97937e9aed1e135de1536bc470e68345d3c2c415353f835760a5e8d312aa
	Oct 13 22:08:22 embed-certs-251758 kubelet[1319]: I1013 22:08:22.098964    1319 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=42.098946646 podStartE2EDuration="42.098946646s" podCreationTimestamp="2025-10-13 22:07:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 22:08:21.105439231 +0000 UTC m=+47.363319744" watchObservedRunningTime="2025-10-13 22:08:22.098946646 +0000 UTC m=+48.356827143"
	Oct 13 22:08:22 embed-certs-251758 kubelet[1319]: I1013 22:08:22.119371    1319 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-gkbv8" podStartSLOduration=43.119351482 podStartE2EDuration="43.119351482s" podCreationTimestamp="2025-10-13 22:07:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 22:08:22.099771462 +0000 UTC m=+48.357651951" watchObservedRunningTime="2025-10-13 22:08:22.119351482 +0000 UTC m=+48.377231979"
	Oct 13 22:08:24 embed-certs-251758 kubelet[1319]: I1013 22:08:24.153914    1319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsp5m\" (UniqueName: \"kubernetes.io/projected/e59e21ac-ac32-43ef-aebf-149407845f99-kube-api-access-jsp5m\") pod \"busybox\" (UID: \"e59e21ac-ac32-43ef-aebf-149407845f99\") " pod="default/busybox"
	
	
	==> storage-provisioner [5ca1e52751c4936f2506a72a2a199cab8a396bc0ad213dc94cc17495f93bbe8c] <==
	I1013 22:08:21.078323       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1013 22:08:21.105842       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1013 22:08:21.105972       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1013 22:08:21.108370       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:08:21.119660       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1013 22:08:21.119931       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1013 22:08:21.120120       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-251758_7bd4f691-3853-4ef9-9faf-402580dee4bf!
	I1013 22:08:21.133349       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"56f6d001-d20d-4495-ba7d-2f8ddd8e7ade", APIVersion:"v1", ResourceVersion:"447", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-251758_7bd4f691-3853-4ef9-9faf-402580dee4bf became leader
	W1013 22:08:21.134360       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:08:21.138698       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1013 22:08:21.224493       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-251758_7bd4f691-3853-4ef9-9faf-402580dee4bf!
	W1013 22:08:23.142296       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:08:23.147503       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:08:25.150442       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:08:25.160337       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:08:27.164442       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:08:27.171514       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:08:29.174288       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:08:29.179085       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:08:31.182183       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:08:31.186472       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:08:33.189052       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:08:33.199342       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:08:35.202817       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:08:35.217689       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-251758 -n embed-certs-251758
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-251758 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.63s)
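Side note on the storage-provisioner output above: the repeated client-go warnings appear because the provisioner's leader election still takes its lock on a v1 Endpoints object ("k8s.io-minikube-hostpath"), which Kubernetes 1.33+ flags as deprecated in favour of discovery.k8s.io/v1 EndpointSlice; for lock objects the usual replacement is a coordination.k8s.io Lease. For comparison only, a minimal client-go sketch of the same election done with a Lease lock, which avoids the warning; this is illustrative, not the provisioner's actual code:

package main

import (
	"context"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	id, _ := os.Hostname()

	// Lease-based lock instead of the deprecated Endpoints lock; same election
	// name and namespace as in the provisioner log above.
	lock := &resourcelock.LeaseLock{
		LeaseMeta: metav1.ObjectMeta{
			Name:      "k8s.io-minikube-hostpath",
			Namespace: "kube-system",
		},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:            lock,
		ReleaseOnCancel: true,
		LeaseDuration:   15 * time.Second,
		RenewDeadline:   10 * time.Second,
		RetryPeriod:     2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				// become leader: start the provisioner controller here
			},
			OnStoppedLeading: func() {
				// lost the lease: stop serving provisioning requests
			},
		},
	})
}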

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (8.7s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-998398 --alsologtostderr -v=1
E1013 22:08:51.248959    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/functional-192425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p no-preload-998398 --alsologtostderr -v=1: exit status 80 (2.614989089s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-998398 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1013 22:08:50.848419  197282 out.go:360] Setting OutFile to fd 1 ...
	I1013 22:08:50.848536  197282 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:08:50.848542  197282 out.go:374] Setting ErrFile to fd 2...
	I1013 22:08:50.848547  197282 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:08:50.848814  197282 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-2495/.minikube/bin
	I1013 22:08:50.849045  197282 out.go:368] Setting JSON to false
	I1013 22:08:50.849059  197282 mustload.go:65] Loading cluster: no-preload-998398
	I1013 22:08:50.849526  197282 config.go:182] Loaded profile config "no-preload-998398": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:08:50.849989  197282 cli_runner.go:164] Run: docker container inspect no-preload-998398 --format={{.State.Status}}
	I1013 22:08:50.871333  197282 host.go:66] Checking if "no-preload-998398" exists ...
	I1013 22:08:50.871703  197282 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 22:08:50.954692  197282 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:63 SystemTime:2025-10-13 22:08:50.945031702 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 22:08:50.955361  197282 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-998398 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true)
wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1013 22:08:50.958777  197282 out.go:179] * Pausing node no-preload-998398 ... 
	I1013 22:08:50.961767  197282 host.go:66] Checking if "no-preload-998398" exists ...
	I1013 22:08:50.962107  197282 ssh_runner.go:195] Run: systemctl --version
	I1013 22:08:50.962158  197282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-998398
	I1013 22:08:50.978691  197282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33071 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/no-preload-998398/id_rsa Username:docker}
	I1013 22:08:51.087460  197282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 22:08:51.113480  197282 pause.go:52] kubelet running: true
	I1013 22:08:51.113604  197282 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1013 22:08:51.378997  197282 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1013 22:08:51.379088  197282 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1013 22:08:51.443979  197282 cri.go:89] found id: "9f698d98db98d458fdf34808d5107a22ca7839d0ce01eebe0aaa5a78d3fb01b8"
	I1013 22:08:51.444005  197282 cri.go:89] found id: "2a55b6efd9bc13cf8923129e73cbfab2aab29f14014b0c88bcb79a3eba86c968"
	I1013 22:08:51.444016  197282 cri.go:89] found id: "f9a58a4b4f83dd1b8124e9a1d5cda8f33c16e1dbdec7513288183b73c61ce6aa"
	I1013 22:08:51.444020  197282 cri.go:89] found id: "4ead3ee86a4c785f3394083046b2503b4c6061e4dffda34cd0695c23f44a70f5"
	I1013 22:08:51.444024  197282 cri.go:89] found id: "0fee995561ea043589c7e89d1f5694903510a9b1118ae89adb0ce92a9b49ac46"
	I1013 22:08:51.444027  197282 cri.go:89] found id: "2346fd5f183a812bd5bdb156c3e135f978ddbc4289db62f027930721e4ad02ce"
	I1013 22:08:51.444031  197282 cri.go:89] found id: "8b465dfa7766b973d11779f1b004b6b9862a3752d706b497e8911ef92d698e5d"
	I1013 22:08:51.444053  197282 cri.go:89] found id: "fef06fef22a944406c398dc34d304bcc991484835183ae13edd49c795fa70c38"
	I1013 22:08:51.444063  197282 cri.go:89] found id: "6d4f60f057762a629c080a53d21fd933695b93f082a2b7fd989f3d4229ac75c3"
	I1013 22:08:51.444070  197282 cri.go:89] found id: "04ee584de5d90123435d17ff80282c3b59cf8e1ac47d8fab84499bbf30b171fd"
	I1013 22:08:51.444074  197282 cri.go:89] found id: "5428038f5a6a252f8fdc19fa45aa1ccf971d6760c8dcb30a76f4aa89407b895d"
	I1013 22:08:51.444077  197282 cri.go:89] found id: ""
	I1013 22:08:51.444142  197282 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 22:08:51.454747  197282 retry.go:31] will retry after 331.174692ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:08:51Z" level=error msg="open /run/runc: no such file or directory"
	I1013 22:08:51.786229  197282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 22:08:51.802198  197282 pause.go:52] kubelet running: false
	I1013 22:08:51.802274  197282 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1013 22:08:52.017208  197282 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1013 22:08:52.017301  197282 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1013 22:08:52.112946  197282 cri.go:89] found id: "9f698d98db98d458fdf34808d5107a22ca7839d0ce01eebe0aaa5a78d3fb01b8"
	I1013 22:08:52.112967  197282 cri.go:89] found id: "2a55b6efd9bc13cf8923129e73cbfab2aab29f14014b0c88bcb79a3eba86c968"
	I1013 22:08:52.112971  197282 cri.go:89] found id: "f9a58a4b4f83dd1b8124e9a1d5cda8f33c16e1dbdec7513288183b73c61ce6aa"
	I1013 22:08:52.112975  197282 cri.go:89] found id: "4ead3ee86a4c785f3394083046b2503b4c6061e4dffda34cd0695c23f44a70f5"
	I1013 22:08:52.112978  197282 cri.go:89] found id: "0fee995561ea043589c7e89d1f5694903510a9b1118ae89adb0ce92a9b49ac46"
	I1013 22:08:52.112982  197282 cri.go:89] found id: "2346fd5f183a812bd5bdb156c3e135f978ddbc4289db62f027930721e4ad02ce"
	I1013 22:08:52.112984  197282 cri.go:89] found id: "8b465dfa7766b973d11779f1b004b6b9862a3752d706b497e8911ef92d698e5d"
	I1013 22:08:52.112987  197282 cri.go:89] found id: "fef06fef22a944406c398dc34d304bcc991484835183ae13edd49c795fa70c38"
	I1013 22:08:52.112990  197282 cri.go:89] found id: "6d4f60f057762a629c080a53d21fd933695b93f082a2b7fd989f3d4229ac75c3"
	I1013 22:08:52.112996  197282 cri.go:89] found id: "04ee584de5d90123435d17ff80282c3b59cf8e1ac47d8fab84499bbf30b171fd"
	I1013 22:08:52.112998  197282 cri.go:89] found id: "5428038f5a6a252f8fdc19fa45aa1ccf971d6760c8dcb30a76f4aa89407b895d"
	I1013 22:08:52.113001  197282 cri.go:89] found id: ""
	I1013 22:08:52.113049  197282 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 22:08:52.124746  197282 retry.go:31] will retry after 302.11631ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:08:52Z" level=error msg="open /run/runc: no such file or directory"
	I1013 22:08:52.427069  197282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 22:08:52.440749  197282 pause.go:52] kubelet running: false
	I1013 22:08:52.440812  197282 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1013 22:08:52.667095  197282 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1013 22:08:52.667183  197282 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1013 22:08:52.738105  197282 cri.go:89] found id: "9f698d98db98d458fdf34808d5107a22ca7839d0ce01eebe0aaa5a78d3fb01b8"
	I1013 22:08:52.738128  197282 cri.go:89] found id: "2a55b6efd9bc13cf8923129e73cbfab2aab29f14014b0c88bcb79a3eba86c968"
	I1013 22:08:52.738133  197282 cri.go:89] found id: "f9a58a4b4f83dd1b8124e9a1d5cda8f33c16e1dbdec7513288183b73c61ce6aa"
	I1013 22:08:52.738137  197282 cri.go:89] found id: "4ead3ee86a4c785f3394083046b2503b4c6061e4dffda34cd0695c23f44a70f5"
	I1013 22:08:52.738140  197282 cri.go:89] found id: "0fee995561ea043589c7e89d1f5694903510a9b1118ae89adb0ce92a9b49ac46"
	I1013 22:08:52.738155  197282 cri.go:89] found id: "2346fd5f183a812bd5bdb156c3e135f978ddbc4289db62f027930721e4ad02ce"
	I1013 22:08:52.738160  197282 cri.go:89] found id: "8b465dfa7766b973d11779f1b004b6b9862a3752d706b497e8911ef92d698e5d"
	I1013 22:08:52.738163  197282 cri.go:89] found id: "fef06fef22a944406c398dc34d304bcc991484835183ae13edd49c795fa70c38"
	I1013 22:08:52.738167  197282 cri.go:89] found id: "6d4f60f057762a629c080a53d21fd933695b93f082a2b7fd989f3d4229ac75c3"
	I1013 22:08:52.738176  197282 cri.go:89] found id: "04ee584de5d90123435d17ff80282c3b59cf8e1ac47d8fab84499bbf30b171fd"
	I1013 22:08:52.738180  197282 cri.go:89] found id: "5428038f5a6a252f8fdc19fa45aa1ccf971d6760c8dcb30a76f4aa89407b895d"
	I1013 22:08:52.738185  197282 cri.go:89] found id: ""
	I1013 22:08:52.738232  197282 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 22:08:52.752991  197282 retry.go:31] will retry after 301.99011ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:08:52Z" level=error msg="open /run/runc: no such file or directory"
	I1013 22:08:53.055427  197282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 22:08:53.076073  197282 pause.go:52] kubelet running: false
	I1013 22:08:53.076140  197282 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1013 22:08:53.290840  197282 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1013 22:08:53.290931  197282 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1013 22:08:53.378163  197282 cri.go:89] found id: "9f698d98db98d458fdf34808d5107a22ca7839d0ce01eebe0aaa5a78d3fb01b8"
	I1013 22:08:53.378182  197282 cri.go:89] found id: "2a55b6efd9bc13cf8923129e73cbfab2aab29f14014b0c88bcb79a3eba86c968"
	I1013 22:08:53.378187  197282 cri.go:89] found id: "f9a58a4b4f83dd1b8124e9a1d5cda8f33c16e1dbdec7513288183b73c61ce6aa"
	I1013 22:08:53.378191  197282 cri.go:89] found id: "4ead3ee86a4c785f3394083046b2503b4c6061e4dffda34cd0695c23f44a70f5"
	I1013 22:08:53.378194  197282 cri.go:89] found id: "0fee995561ea043589c7e89d1f5694903510a9b1118ae89adb0ce92a9b49ac46"
	I1013 22:08:53.378199  197282 cri.go:89] found id: "2346fd5f183a812bd5bdb156c3e135f978ddbc4289db62f027930721e4ad02ce"
	I1013 22:08:53.378202  197282 cri.go:89] found id: "8b465dfa7766b973d11779f1b004b6b9862a3752d706b497e8911ef92d698e5d"
	I1013 22:08:53.378205  197282 cri.go:89] found id: "fef06fef22a944406c398dc34d304bcc991484835183ae13edd49c795fa70c38"
	I1013 22:08:53.378208  197282 cri.go:89] found id: "6d4f60f057762a629c080a53d21fd933695b93f082a2b7fd989f3d4229ac75c3"
	I1013 22:08:53.378214  197282 cri.go:89] found id: "04ee584de5d90123435d17ff80282c3b59cf8e1ac47d8fab84499bbf30b171fd"
	I1013 22:08:53.378217  197282 cri.go:89] found id: "5428038f5a6a252f8fdc19fa45aa1ccf971d6760c8dcb30a76f4aa89407b895d"
	I1013 22:08:53.378220  197282 cri.go:89] found id: ""
	I1013 22:08:53.378267  197282 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 22:08:53.392814  197282 out.go:203] 
	W1013 22:08:53.395717  197282 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:08:53Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:08:53Z" level=error msg="open /run/runc: no such file or directory"
	
	W1013 22:08:53.395735  197282 out.go:285] * 
	* 
	W1013 22:08:53.401706  197282 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1013 22:08:53.406529  197282 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p no-preload-998398 --alsologtostderr -v=1 failed: exit status 80
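Reading the stderr above: pause first disables the kubelet, then enumerates containers in the kube-system, kubernetes-dashboard and istio-operator namespaces via crictl, and finally asks runc for the set of running containers with sudo runc list -f json. On this crio node /run/runc is absent, so every attempt fails with "open /run/runc: no such file or directory"; after three short retries the command gives up and exits with GUEST_PAUSE (status 80). A rough, self-contained sketch of that retry step follows; the attempt budget and helper names are illustrative, not minikube's actual implementation:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// listRunc runs `sudo runc list -f json` and returns its combined output.
func listRunc() ([]byte, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	if err != nil {
		return nil, fmt.Errorf("runc list: %w: %s", err, out)
	}
	return out, nil
}

func main() {
	var lastErr error
	// The log above shows roughly three retries with ~300ms waits before failing.
	for attempt := 0; attempt < 4; attempt++ {
		out, err := listRunc()
		if err == nil {
			fmt.Printf("running containers: %s\n", out)
			return
		}
		lastErr = err
		wait := 300*time.Millisecond + time.Duration(attempt)*30*time.Millisecond
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
	}
	// Once the budget is spent, minikube surfaces this as GUEST_PAUSE (exit 80).
	fmt.Printf("giving up: %v\n", lastErr)
}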
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-998398
helpers_test.go:243: (dbg) docker inspect no-preload-998398:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6fb16f37ec05da7de816bc3e9c2e323940cdc45e442a73f88f3547ae26c3416c",
	        "Created": "2025-10-13T22:06:10.888076989Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 193786,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-13T22:07:49.547990769Z",
	            "FinishedAt": "2025-10-13T22:07:48.734489858Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/6fb16f37ec05da7de816bc3e9c2e323940cdc45e442a73f88f3547ae26c3416c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6fb16f37ec05da7de816bc3e9c2e323940cdc45e442a73f88f3547ae26c3416c/hostname",
	        "HostsPath": "/var/lib/docker/containers/6fb16f37ec05da7de816bc3e9c2e323940cdc45e442a73f88f3547ae26c3416c/hosts",
	        "LogPath": "/var/lib/docker/containers/6fb16f37ec05da7de816bc3e9c2e323940cdc45e442a73f88f3547ae26c3416c/6fb16f37ec05da7de816bc3e9c2e323940cdc45e442a73f88f3547ae26c3416c-json.log",
	        "Name": "/no-preload-998398",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-998398:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-998398",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6fb16f37ec05da7de816bc3e9c2e323940cdc45e442a73f88f3547ae26c3416c",
	                "LowerDir": "/var/lib/docker/overlay2/499694e6085395b70735b3d3547db65ef3a8c5e98935f88339db5f4531738658-init/diff:/var/lib/docker/overlay2/160e637be34680c755ffc6109c686a7fe5c4dfcba06c9274b3806724dc064518/diff",
	                "MergedDir": "/var/lib/docker/overlay2/499694e6085395b70735b3d3547db65ef3a8c5e98935f88339db5f4531738658/merged",
	                "UpperDir": "/var/lib/docker/overlay2/499694e6085395b70735b3d3547db65ef3a8c5e98935f88339db5f4531738658/diff",
	                "WorkDir": "/var/lib/docker/overlay2/499694e6085395b70735b3d3547db65ef3a8c5e98935f88339db5f4531738658/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-998398",
	                "Source": "/var/lib/docker/volumes/no-preload-998398/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-998398",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-998398",
	                "name.minikube.sigs.k8s.io": "no-preload-998398",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "63fa82deb7d95770d68e9b31074f389cfb066b2e32a6f1c3a91aad23df2a85d5",
	            "SandboxKey": "/var/run/docker/netns/63fa82deb7d9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33071"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33072"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33075"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33073"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33074"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-998398": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3e:c2:2f:6d:b2:ab",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "833f6629e3a8d48e88017e58115925d444d24da96413e70671b51381906ca938",
	                    "EndpointID": "b98f6e61bfa1d371beaa908b78be753225734cdec4576d0f023305d072af1d82",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-998398",
	                        "6fb16f37ec05"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
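The docker inspect dump above is also where the pause command earlier resolved its SSH endpoint (127.0.0.1:33071 for 22/tcp, via the NetworkSettings.Ports template in the cli_runner line). A small sketch of extracting the same mapping from the inspect JSON programmatically; the struct mirrors the fields shown in the dump, everything else is illustrative:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// container declares only the fields needed here; names mirror the inspect output.
type container struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	raw, err := exec.Command("docker", "inspect", "no-preload-998398").Output()
	if err != nil {
		panic(err)
	}
	var out []container
	if err := json.Unmarshal(raw, &out); err != nil {
		panic(err)
	}
	// e.g. 127.0.0.1:33071, the endpoint the pause command dialed for SSH.
	ssh := out[0].NetworkSettings.Ports["22/tcp"][0]
	fmt.Printf("ssh endpoint: %s:%s\n", ssh.HostIp, ssh.HostPort)
}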
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-998398 -n no-preload-998398
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-998398 -n no-preload-998398: exit status 2 (431.568613ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-998398 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-998398 logs -n 25: (1.621542358s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬────────────────────
─┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼────────────────────
─┤
	│ ssh     │ -p cert-options-194931 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-194931    │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │ 13 Oct 25 22:04 UTC │
	│ delete  │ -p cert-options-194931                                                                                                                                                                                                                        │ cert-options-194931    │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │ 13 Oct 25 22:04 UTC │
	│ start   │ -p old-k8s-version-061725 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-061725 │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │ 13 Oct 25 22:05 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-061725 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-061725 │ jenkins │ v1.37.0 │ 13 Oct 25 22:05 UTC │                     │
	│ stop    │ -p old-k8s-version-061725 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-061725 │ jenkins │ v1.37.0 │ 13 Oct 25 22:05 UTC │ 13 Oct 25 22:05 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-061725 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-061725 │ jenkins │ v1.37.0 │ 13 Oct 25 22:05 UTC │ 13 Oct 25 22:05 UTC │
	│ start   │ -p old-k8s-version-061725 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-061725 │ jenkins │ v1.37.0 │ 13 Oct 25 22:05 UTC │ 13 Oct 25 22:06 UTC │
	│ start   │ -p cert-expiration-546667 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-546667 │ jenkins │ v1.37.0 │ 13 Oct 25 22:05 UTC │ 13 Oct 25 22:06 UTC │
	│ delete  │ -p cert-expiration-546667                                                                                                                                                                                                                     │ cert-expiration-546667 │ jenkins │ v1.37.0 │ 13 Oct 25 22:06 UTC │ 13 Oct 25 22:06 UTC │
	│ start   │ -p no-preload-998398 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-998398      │ jenkins │ v1.37.0 │ 13 Oct 25 22:06 UTC │ 13 Oct 25 22:07 UTC │
	│ image   │ old-k8s-version-061725 image list --format=json                                                                                                                                                                                               │ old-k8s-version-061725 │ jenkins │ v1.37.0 │ 13 Oct 25 22:06 UTC │ 13 Oct 25 22:06 UTC │
	│ pause   │ -p old-k8s-version-061725 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-061725 │ jenkins │ v1.37.0 │ 13 Oct 25 22:06 UTC │                     │
	│ delete  │ -p old-k8s-version-061725                                                                                                                                                                                                                     │ old-k8s-version-061725 │ jenkins │ v1.37.0 │ 13 Oct 25 22:06 UTC │ 13 Oct 25 22:07 UTC │
	│ delete  │ -p old-k8s-version-061725                                                                                                                                                                                                                     │ old-k8s-version-061725 │ jenkins │ v1.37.0 │ 13 Oct 25 22:07 UTC │ 13 Oct 25 22:07 UTC │
	│ start   │ -p embed-certs-251758 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-251758     │ jenkins │ v1.37.0 │ 13 Oct 25 22:07 UTC │ 13 Oct 25 22:08 UTC │
	│ addons  │ enable metrics-server -p no-preload-998398 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-998398      │ jenkins │ v1.37.0 │ 13 Oct 25 22:07 UTC │                     │
	│ stop    │ -p no-preload-998398 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-998398      │ jenkins │ v1.37.0 │ 13 Oct 25 22:07 UTC │ 13 Oct 25 22:07 UTC │
	│ addons  │ enable dashboard -p no-preload-998398 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-998398      │ jenkins │ v1.37.0 │ 13 Oct 25 22:07 UTC │ 13 Oct 25 22:07 UTC │
	│ start   │ -p no-preload-998398 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-998398      │ jenkins │ v1.37.0 │ 13 Oct 25 22:07 UTC │ 13 Oct 25 22:08 UTC │
	│ addons  │ enable metrics-server -p embed-certs-251758 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-251758     │ jenkins │ v1.37.0 │ 13 Oct 25 22:08 UTC │                     │
	│ stop    │ -p embed-certs-251758 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-251758     │ jenkins │ v1.37.0 │ 13 Oct 25 22:08 UTC │ 13 Oct 25 22:08 UTC │
	│ addons  │ enable dashboard -p embed-certs-251758 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-251758     │ jenkins │ v1.37.0 │ 13 Oct 25 22:08 UTC │ 13 Oct 25 22:08 UTC │
	│ start   │ -p embed-certs-251758 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-251758     │ jenkins │ v1.37.0 │ 13 Oct 25 22:08 UTC │                     │
	│ image   │ no-preload-998398 image list --format=json                                                                                                                                                                                                    │ no-preload-998398      │ jenkins │ v1.37.0 │ 13 Oct 25 22:08 UTC │ 13 Oct 25 22:08 UTC │
	│ pause   │ -p no-preload-998398 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-998398      │ jenkins │ v1.37.0 │ 13 Oct 25 22:08 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴────────────────────
─┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 22:08:48
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 22:08:48.023530  196707 out.go:360] Setting OutFile to fd 1 ...
	I1013 22:08:48.023692  196707 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:08:48.023703  196707 out.go:374] Setting ErrFile to fd 2...
	I1013 22:08:48.023707  196707 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:08:48.024077  196707 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-2495/.minikube/bin
	I1013 22:08:48.024563  196707 out.go:368] Setting JSON to false
	I1013 22:08:48.025620  196707 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":6662,"bootTime":1760386666,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1013 22:08:48.025699  196707 start.go:141] virtualization:  
	I1013 22:08:48.028839  196707 out.go:179] * [embed-certs-251758] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1013 22:08:48.032780  196707 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 22:08:48.032902  196707 notify.go:220] Checking for updates...
	I1013 22:08:48.038915  196707 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 22:08:48.041918  196707 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-2495/kubeconfig
	I1013 22:08:48.044978  196707 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-2495/.minikube
	I1013 22:08:48.048207  196707 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1013 22:08:48.051334  196707 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 22:08:48.054858  196707 config.go:182] Loaded profile config "embed-certs-251758": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:08:48.055479  196707 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 22:08:48.081472  196707 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1013 22:08:48.081585  196707 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 22:08:48.138411  196707 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-13 22:08:48.12869631 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 22:08:48.138517  196707 docker.go:318] overlay module found
	I1013 22:08:48.141759  196707 out.go:179] * Using the docker driver based on existing profile
	I1013 22:08:48.144535  196707 start.go:305] selected driver: docker
	I1013 22:08:48.144553  196707 start.go:925] validating driver "docker" against &{Name:embed-certs-251758 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-251758 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:08:48.144642  196707 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 22:08:48.145359  196707 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 22:08:48.220421  196707 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-13 22:08:48.211448493 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 22:08:48.220755  196707 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 22:08:48.220788  196707 cni.go:84] Creating CNI manager for ""
	I1013 22:08:48.220853  196707 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 22:08:48.220895  196707 start.go:349] cluster config:
	{Name:embed-certs-251758 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-251758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:08:48.224167  196707 out.go:179] * Starting "embed-certs-251758" primary control-plane node in "embed-certs-251758" cluster
	I1013 22:08:48.227189  196707 cache.go:123] Beginning downloading kic base image for docker with crio
	I1013 22:08:48.230172  196707 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1013 22:08:48.233068  196707 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:08:48.233130  196707 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-2495/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1013 22:08:48.233143  196707 cache.go:58] Caching tarball of preloaded images
	I1013 22:08:48.233166  196707 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1013 22:08:48.233227  196707 preload.go:233] Found /home/jenkins/minikube-integration/21724-2495/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1013 22:08:48.233238  196707 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1013 22:08:48.233353  196707 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/embed-certs-251758/config.json ...
	I1013 22:08:48.252886  196707 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1013 22:08:48.252905  196707 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1013 22:08:48.252925  196707 cache.go:232] Successfully downloaded all kic artifacts
	I1013 22:08:48.252950  196707 start.go:360] acquireMachinesLock for embed-certs-251758: {Name:mk516ca80db4149cf875ca7692ac1e5faffe2cbf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 22:08:48.253008  196707 start.go:364] duration metric: took 35.462µs to acquireMachinesLock for "embed-certs-251758"
	I1013 22:08:48.253027  196707 start.go:96] Skipping create...Using existing machine configuration
	I1013 22:08:48.253038  196707 fix.go:54] fixHost starting: 
	I1013 22:08:48.253375  196707 cli_runner.go:164] Run: docker container inspect embed-certs-251758 --format={{.State.Status}}
	I1013 22:08:48.270370  196707 fix.go:112] recreateIfNeeded on embed-certs-251758: state=Stopped err=<nil>
	W1013 22:08:48.270405  196707 fix.go:138] unexpected machine state, will restart: <nil>
	I1013 22:08:48.273620  196707 out.go:252] * Restarting existing docker container for "embed-certs-251758" ...
	I1013 22:08:48.273722  196707 cli_runner.go:164] Run: docker start embed-certs-251758
	I1013 22:08:48.544410  196707 cli_runner.go:164] Run: docker container inspect embed-certs-251758 --format={{.State.Status}}
	I1013 22:08:48.570193  196707 kic.go:430] container "embed-certs-251758" state is running.
	I1013 22:08:48.570580  196707 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-251758
	I1013 22:08:48.599409  196707 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/embed-certs-251758/config.json ...
	I1013 22:08:48.599751  196707 machine.go:93] provisionDockerMachine start ...
	I1013 22:08:48.599897  196707 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-251758
	I1013 22:08:48.618708  196707 main.go:141] libmachine: Using SSH client type: native
	I1013 22:08:48.619030  196707 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33076 <nil> <nil>}
	I1013 22:08:48.619046  196707 main.go:141] libmachine: About to run SSH command:
	hostname
	I1013 22:08:48.619910  196707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1013 22:08:51.767838  196707 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-251758
	
	I1013 22:08:51.767902  196707 ubuntu.go:182] provisioning hostname "embed-certs-251758"
	I1013 22:08:51.767979  196707 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-251758
	I1013 22:08:51.790943  196707 main.go:141] libmachine: Using SSH client type: native
	I1013 22:08:51.791252  196707 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33076 <nil> <nil>}
	I1013 22:08:51.791264  196707 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-251758 && echo "embed-certs-251758" | sudo tee /etc/hostname
	I1013 22:08:51.958949  196707 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-251758
	
	I1013 22:08:51.959035  196707 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-251758
	I1013 22:08:51.982423  196707 main.go:141] libmachine: Using SSH client type: native
	I1013 22:08:51.982725  196707 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33076 <nil> <nil>}
	I1013 22:08:51.982742  196707 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-251758' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-251758/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-251758' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 22:08:52.147955  196707 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1013 22:08:52.147984  196707 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-2495/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-2495/.minikube}
	I1013 22:08:52.148007  196707 ubuntu.go:190] setting up certificates
	I1013 22:08:52.148018  196707 provision.go:84] configureAuth start
	I1013 22:08:52.148075  196707 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-251758
	I1013 22:08:52.165220  196707 provision.go:143] copyHostCerts
	I1013 22:08:52.165298  196707 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-2495/.minikube/ca.pem, removing ...
	I1013 22:08:52.165321  196707 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-2495/.minikube/ca.pem
	I1013 22:08:52.165399  196707 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-2495/.minikube/ca.pem (1082 bytes)
	I1013 22:08:52.165508  196707 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-2495/.minikube/cert.pem, removing ...
	I1013 22:08:52.165519  196707 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-2495/.minikube/cert.pem
	I1013 22:08:52.165553  196707 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-2495/.minikube/cert.pem (1123 bytes)
	I1013 22:08:52.165625  196707 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-2495/.minikube/key.pem, removing ...
	I1013 22:08:52.165634  196707 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-2495/.minikube/key.pem
	I1013 22:08:52.165659  196707 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-2495/.minikube/key.pem (1675 bytes)
	I1013 22:08:52.165721  196707 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-2495/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca-key.pem org=jenkins.embed-certs-251758 san=[127.0.0.1 192.168.85.2 embed-certs-251758 localhost minikube]
	I1013 22:08:52.322392  196707 provision.go:177] copyRemoteCerts
	I1013 22:08:52.322454  196707 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 22:08:52.322497  196707 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-251758
	I1013 22:08:52.339819  196707 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33076 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/embed-certs-251758/id_rsa Username:docker}
	I1013 22:08:52.449062  196707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1013 22:08:52.474013  196707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1013 22:08:52.501052  196707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1013 22:08:52.534173  196707 provision.go:87] duration metric: took 386.130688ms to configureAuth
	I1013 22:08:52.534197  196707 ubuntu.go:206] setting minikube options for container-runtime
	I1013 22:08:52.534381  196707 config.go:182] Loaded profile config "embed-certs-251758": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:08:52.534487  196707 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-251758
	I1013 22:08:52.558078  196707 main.go:141] libmachine: Using SSH client type: native
	I1013 22:08:52.558379  196707 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33076 <nil> <nil>}
	I1013 22:08:52.558395  196707 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1013 22:08:52.895005  196707 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1013 22:08:52.895029  196707 machine.go:96] duration metric: took 4.295264084s to provisionDockerMachine
	I1013 22:08:52.895040  196707 start.go:293] postStartSetup for "embed-certs-251758" (driver="docker")
	I1013 22:08:52.895051  196707 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 22:08:52.895126  196707 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 22:08:52.895170  196707 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-251758
	I1013 22:08:52.921671  196707 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33076 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/embed-certs-251758/id_rsa Username:docker}
	
	
	==> CRI-O <==
	Oct 13 22:08:38 no-preload-998398 crio[650]: time="2025-10-13T22:08:38.171259171Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=9e9f31d2-6b33-47ea-83d5-1835c85176ec name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:08:38 no-preload-998398 crio[650]: time="2025-10-13T22:08:38.172494685Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=e44f458e-d8d1-42ea-ae23-9b534644f7d0 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:08:38 no-preload-998398 crio[650]: time="2025-10-13T22:08:38.173502052Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2dm9r/dashboard-metrics-scraper" id=27735105-e541-4fd2-bc71-f9a3070f5e06 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:08:38 no-preload-998398 crio[650]: time="2025-10-13T22:08:38.173704615Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:08:38 no-preload-998398 crio[650]: time="2025-10-13T22:08:38.180707179Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:08:38 no-preload-998398 crio[650]: time="2025-10-13T22:08:38.181331541Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:08:38 no-preload-998398 crio[650]: time="2025-10-13T22:08:38.196285165Z" level=info msg="Created container 04ee584de5d90123435d17ff80282c3b59cf8e1ac47d8fab84499bbf30b171fd: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2dm9r/dashboard-metrics-scraper" id=27735105-e541-4fd2-bc71-f9a3070f5e06 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:08:38 no-preload-998398 crio[650]: time="2025-10-13T22:08:38.197288857Z" level=info msg="Starting container: 04ee584de5d90123435d17ff80282c3b59cf8e1ac47d8fab84499bbf30b171fd" id=725806ce-3b4d-4eeb-ae22-965623ecfef0 name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 22:08:38 no-preload-998398 crio[650]: time="2025-10-13T22:08:38.200099141Z" level=info msg="Started container" PID=1637 containerID=04ee584de5d90123435d17ff80282c3b59cf8e1ac47d8fab84499bbf30b171fd description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2dm9r/dashboard-metrics-scraper id=725806ce-3b4d-4eeb-ae22-965623ecfef0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=01134f09e887822bf1853d65bb5cc14aa2450c39f37b8c8517b202846cbd7d72
	Oct 13 22:08:38 no-preload-998398 conmon[1635]: conmon 04ee584de5d90123435d <ninfo>: container 1637 exited with status 1
	Oct 13 22:08:38 no-preload-998398 crio[650]: time="2025-10-13T22:08:38.416123188Z" level=info msg="Removing container: 301cfdfcce72e85368320fc5419f7232e4524ec8cb624f3e16f0554ad3aa8a27" id=f84bdc22-97eb-428d-91c8-151a2fc3e15f name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 13 22:08:38 no-preload-998398 crio[650]: time="2025-10-13T22:08:38.423702607Z" level=info msg="Error loading conmon cgroup of container 301cfdfcce72e85368320fc5419f7232e4524ec8cb624f3e16f0554ad3aa8a27: cgroup deleted" id=f84bdc22-97eb-428d-91c8-151a2fc3e15f name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 13 22:08:38 no-preload-998398 crio[650]: time="2025-10-13T22:08:38.427341268Z" level=info msg="Removed container 301cfdfcce72e85368320fc5419f7232e4524ec8cb624f3e16f0554ad3aa8a27: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2dm9r/dashboard-metrics-scraper" id=f84bdc22-97eb-428d-91c8-151a2fc3e15f name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 13 22:08:42 no-preload-998398 crio[650]: time="2025-10-13T22:08:42.508934012Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 13 22:08:42 no-preload-998398 crio[650]: time="2025-10-13T22:08:42.513095991Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 13 22:08:42 no-preload-998398 crio[650]: time="2025-10-13T22:08:42.513130378Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 13 22:08:42 no-preload-998398 crio[650]: time="2025-10-13T22:08:42.51315528Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 13 22:08:42 no-preload-998398 crio[650]: time="2025-10-13T22:08:42.516367605Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 13 22:08:42 no-preload-998398 crio[650]: time="2025-10-13T22:08:42.516399547Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 13 22:08:42 no-preload-998398 crio[650]: time="2025-10-13T22:08:42.51642394Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 13 22:08:42 no-preload-998398 crio[650]: time="2025-10-13T22:08:42.519397206Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 13 22:08:42 no-preload-998398 crio[650]: time="2025-10-13T22:08:42.519427548Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 13 22:08:42 no-preload-998398 crio[650]: time="2025-10-13T22:08:42.519454231Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 13 22:08:42 no-preload-998398 crio[650]: time="2025-10-13T22:08:42.522435177Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 13 22:08:42 no-preload-998398 crio[650]: time="2025-10-13T22:08:42.522466905Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	04ee584de5d90       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           16 seconds ago      Exited              dashboard-metrics-scraper   2                   01134f09e8878       dashboard-metrics-scraper-6ffb444bf9-2dm9r   kubernetes-dashboard
	9f698d98db98d       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           21 seconds ago      Running             storage-provisioner         2                   1b4c9d7dda457       storage-provisioner                          kube-system
	5428038f5a6a2       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   44 seconds ago      Running             kubernetes-dashboard        0                   d2d8aa7f42c1b       kubernetes-dashboard-855c9754f9-jplsp        kubernetes-dashboard
	2a55b6efd9bc1       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           52 seconds ago      Running             coredns                     1                   ab52e218829ee       coredns-66bc5c9577-7vlmn                     kube-system
	2703726f04b00       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           52 seconds ago      Running             busybox                     1                   fee21fd98b1af       busybox                                      default
	f9a58a4b4f83d       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           52 seconds ago      Exited              storage-provisioner         1                   1b4c9d7dda457       storage-provisioner                          kube-system
	4ead3ee86a4c7       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           52 seconds ago      Running             kindnet-cni                 1                   b9b36f6d44f15       kindnet-6nvxb                                kube-system
	0fee995561ea0       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           52 seconds ago      Running             kube-proxy                  1                   b5966437c7201       kube-proxy-7zmxr                             kube-system
	2346fd5f183a8       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           57 seconds ago      Running             etcd                        1                   742f9efcb9dbb       etcd-no-preload-998398                       kube-system
	8b465dfa7766b       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           57 seconds ago      Running             kube-controller-manager     1                   8d6e6e4081ba3       kube-controller-manager-no-preload-998398    kube-system
	fef06fef22a94       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           58 seconds ago      Running             kube-scheduler              1                   03cd656355dc8       kube-scheduler-no-preload-998398             kube-system
	6d4f60f057762       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           58 seconds ago      Running             kube-apiserver              1                   57ef64aea36d6       kube-apiserver-no-preload-998398             kube-system
	
	
	==> coredns [2a55b6efd9bc13cf8923129e73cbfab2aab29f14014b0c88bcb79a3eba86c968] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] 127.0.0.1:43375 - 56030 "HINFO IN 6280205932722633499.109204646060481857. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.005070658s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-998398
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-998398
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6d66ff63385795e7745a92b3d96cb54f5b977801
	                    minikube.k8s.io/name=no-preload-998398
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_13T22_07_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 Oct 2025 22:06:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-998398
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 Oct 2025 22:08:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 Oct 2025 22:08:42 +0000   Mon, 13 Oct 2025 22:06:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 Oct 2025 22:08:42 +0000   Mon, 13 Oct 2025 22:06:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 Oct 2025 22:08:42 +0000   Mon, 13 Oct 2025 22:06:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 13 Oct 2025 22:08:42 +0000   Mon, 13 Oct 2025 22:07:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-998398
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 e8c95e365f6843b98976e6eaa420070f
	  System UUID:                8be1b8dc-60be-4cac-9ebb-ba90ed9c5cdb
	  Boot ID:                    d306204e-e74c-4697-9887-c9f3c96cc083
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 coredns-66bc5c9577-7vlmn                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     108s
	  kube-system                 etcd-no-preload-998398                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         113s
	  kube-system                 kindnet-6nvxb                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      108s
	  kube-system                 kube-apiserver-no-preload-998398              250m (12%)    0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-controller-manager-no-preload-998398     200m (10%)    0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-proxy-7zmxr                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-scheduler-no-preload-998398              100m (5%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-2dm9r    0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-jplsp         0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 107s                 kube-proxy       
	  Normal   Starting                 51s                  kube-proxy       
	  Normal   Starting                 2m3s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m3s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m3s (x8 over 2m3s)  kubelet          Node no-preload-998398 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m3s (x8 over 2m3s)  kubelet          Node no-preload-998398 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m3s (x8 over 2m3s)  kubelet          Node no-preload-998398 status is now: NodeHasSufficientPID
	  Normal   Starting                 114s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 114s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    113s                 kubelet          Node no-preload-998398 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     113s                 kubelet          Node no-preload-998398 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  113s                 kubelet          Node no-preload-998398 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           109s                 node-controller  Node no-preload-998398 event: Registered Node no-preload-998398 in Controller
	  Normal   NodeReady                93s                  kubelet          Node no-preload-998398 status is now: NodeReady
	  Normal   Starting                 58s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 58s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  58s (x8 over 58s)    kubelet          Node no-preload-998398 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    58s (x8 over 58s)    kubelet          Node no-preload-998398 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     58s (x8 over 58s)    kubelet          Node no-preload-998398 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           50s                  node-controller  Node no-preload-998398 event: Registered Node no-preload-998398 in Controller
	
	
	==> dmesg <==
	[Oct13 21:38] overlayfs: idmapped layers are currently not supported
	[Oct13 21:39] overlayfs: idmapped layers are currently not supported
	[Oct13 21:40] overlayfs: idmapped layers are currently not supported
	[Oct13 21:41] overlayfs: idmapped layers are currently not supported
	[Oct13 21:42] overlayfs: idmapped layers are currently not supported
	[  +7.684868] overlayfs: idmapped layers are currently not supported
	[Oct13 21:43] overlayfs: idmapped layers are currently not supported
	[ +17.500139] overlayfs: idmapped layers are currently not supported
	[Oct13 21:44] overlayfs: idmapped layers are currently not supported
	[ +25.978359] overlayfs: idmapped layers are currently not supported
	[Oct13 21:46] overlayfs: idmapped layers are currently not supported
	[Oct13 21:47] overlayfs: idmapped layers are currently not supported
	[Oct13 21:49] overlayfs: idmapped layers are currently not supported
	[Oct13 21:50] overlayfs: idmapped layers are currently not supported
	[Oct13 21:51] overlayfs: idmapped layers are currently not supported
	[Oct13 21:53] overlayfs: idmapped layers are currently not supported
	[Oct13 21:54] overlayfs: idmapped layers are currently not supported
	[Oct13 21:55] overlayfs: idmapped layers are currently not supported
	[Oct13 22:02] overlayfs: idmapped layers are currently not supported
	[Oct13 22:04] overlayfs: idmapped layers are currently not supported
	[ +37.438407] overlayfs: idmapped layers are currently not supported
	[Oct13 22:05] overlayfs: idmapped layers are currently not supported
	[Oct13 22:06] overlayfs: idmapped layers are currently not supported
	[Oct13 22:07] overlayfs: idmapped layers are currently not supported
	[ +29.672836] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [2346fd5f183a812bd5bdb156c3e135f978ddbc4289db62f027930721e4ad02ce] <==
	{"level":"warn","ts":"2025-10-13T22:07:59.621408Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:07:59.644423Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:07:59.657521Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:07:59.682325Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:07:59.709352Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:07:59.729842Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:07:59.744389Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:07:59.761785Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:07:59.777991Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:07:59.817972Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:07:59.826953Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:07:59.840565Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:07:59.864619Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:07:59.875682Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:07:59.895897Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:07:59.917223Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:07:59.926604Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:07:59.944052Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:07:59.961871Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:07:59.985809Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:08:00.004217Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:08:00.116595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:08:00.198661Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:08:00.241823Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:08:00.441029Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48214","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 22:08:55 up  1:51,  0 user,  load average: 2.43, 2.58, 2.15
	Linux no-preload-998398 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4ead3ee86a4c785f3394083046b2503b4c6061e4dffda34cd0695c23f44a70f5] <==
	I1013 22:08:02.305701       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1013 22:08:02.312241       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1013 22:08:02.312403       1 main.go:148] setting mtu 1500 for CNI 
	I1013 22:08:02.312416       1 main.go:178] kindnetd IP family: "ipv4"
	I1013 22:08:02.312431       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-13T22:08:02Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1013 22:08:02.524067       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1013 22:08:02.524106       1 controller.go:381] "Waiting for informer caches to sync"
	I1013 22:08:02.524118       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1013 22:08:02.524606       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1013 22:08:32.509191       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1013 22:08:32.510339       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1013 22:08:32.524878       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1013 22:08:32.524988       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1013 22:08:34.124277       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1013 22:08:34.124389       1 metrics.go:72] Registering metrics
	I1013 22:08:34.124504       1 controller.go:711] "Syncing nftables rules"
	I1013 22:08:42.508608       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1013 22:08:42.508663       1 main.go:301] handling current node
	I1013 22:08:52.515868       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1013 22:08:52.515918       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6d4f60f057762a629c080a53d21fd933695b93f082a2b7fd989f3d4229ac75c3] <==
	I1013 22:08:01.373135       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1013 22:08:01.374125       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1013 22:08:01.374140       1 policy_source.go:240] refreshing policies
	I1013 22:08:01.376471       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1013 22:08:01.376504       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1013 22:08:01.377908       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1013 22:08:01.404428       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1013 22:08:01.428064       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1013 22:08:01.440311       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1013 22:08:01.440963       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1013 22:08:01.441307       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1013 22:08:01.476088       1 cache.go:39] Caches are synced for autoregister controller
	I1013 22:08:01.476972       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1013 22:08:01.539618       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	E1013 22:08:01.573219       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1013 22:08:02.008487       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1013 22:08:02.735625       1 controller.go:667] quota admission added evaluator for: namespaces
	I1013 22:08:02.835342       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1013 22:08:02.872647       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1013 22:08:02.882764       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1013 22:08:02.961853       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.233.115"}
	I1013 22:08:02.989301       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.87.14"}
	I1013 22:08:04.812257       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1013 22:08:05.156227       1 controller.go:667] quota admission added evaluator for: endpoints
	I1013 22:08:05.210381       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [8b465dfa7766b973d11779f1b004b6b9862a3752d706b497e8911ef92d698e5d] <==
	I1013 22:08:04.724123       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1013 22:08:04.726842       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1013 22:08:04.731999       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1013 22:08:04.732416       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1013 22:08:04.747942       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 22:08:04.750026       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1013 22:08:04.750089       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1013 22:08:04.750132       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1013 22:08:04.750190       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1013 22:08:04.766025       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1013 22:08:04.776368       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1013 22:08:04.782622       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1013 22:08:04.795929       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1013 22:08:04.799437       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1013 22:08:04.807916       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1013 22:08:04.808292       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1013 22:08:04.808385       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1013 22:08:04.808733       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1013 22:08:04.808852       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1013 22:08:04.812411       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1013 22:08:04.815009       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1013 22:08:04.815107       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1013 22:08:04.822235       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 22:08:04.822257       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1013 22:08:04.822264       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [0fee995561ea043589c7e89d1f5694903510a9b1118ae89adb0ce92a9b49ac46] <==
	I1013 22:08:02.530137       1 server_linux.go:53] "Using iptables proxy"
	I1013 22:08:02.809574       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 22:08:02.915991       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 22:08:02.916026       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1013 22:08:02.916116       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 22:08:03.016171       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1013 22:08:03.016249       1 server_linux.go:132] "Using iptables Proxier"
	I1013 22:08:03.024980       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 22:08:03.025601       1 server.go:527] "Version info" version="v1.34.1"
	I1013 22:08:03.025994       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 22:08:03.028903       1 config.go:200] "Starting service config controller"
	I1013 22:08:03.029040       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 22:08:03.029211       1 config.go:106] "Starting endpoint slice config controller"
	I1013 22:08:03.029254       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 22:08:03.029309       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 22:08:03.029343       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 22:08:03.037500       1 config.go:309] "Starting node config controller"
	I1013 22:08:03.048683       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 22:08:03.048714       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 22:08:03.129968       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1013 22:08:03.130010       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1013 22:08:03.130070       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [fef06fef22a944406c398dc34d304bcc991484835183ae13edd49c795fa70c38] <==
	I1013 22:07:59.228884       1 serving.go:386] Generated self-signed cert in-memory
	I1013 22:08:01.647267       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1013 22:08:01.647304       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 22:08:01.681845       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1013 22:08:01.681973       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1013 22:08:01.682690       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 22:08:01.687902       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 22:08:01.682773       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1013 22:08:01.689374       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1013 22:08:01.693269       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1013 22:08:01.693388       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1013 22:08:01.783591       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1013 22:08:01.790385       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 22:08:01.791255       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Oct 13 22:08:05 no-preload-998398 kubelet[766]: I1013 22:08:05.420878     766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/306156f6-40be-4f57-9275-217f328d41ea-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-2dm9r\" (UID: \"306156f6-40be-4f57-9275-217f328d41ea\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2dm9r"
	Oct 13 22:08:05 no-preload-998398 kubelet[766]: I1013 22:08:05.420905     766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gq668\" (UniqueName: \"kubernetes.io/projected/4814b560-6eca-4988-86bf-4b885ba6f1f9-kube-api-access-gq668\") pod \"kubernetes-dashboard-855c9754f9-jplsp\" (UID: \"4814b560-6eca-4988-86bf-4b885ba6f1f9\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-jplsp"
	Oct 13 22:08:05 no-preload-998398 kubelet[766]: I1013 22:08:05.420925     766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/4814b560-6eca-4988-86bf-4b885ba6f1f9-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-jplsp\" (UID: \"4814b560-6eca-4988-86bf-4b885ba6f1f9\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-jplsp"
	Oct 13 22:08:05 no-preload-998398 kubelet[766]: W1013 22:08:05.680926     766 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/6fb16f37ec05da7de816bc3e9c2e323940cdc45e442a73f88f3547ae26c3416c/crio-d2d8aa7f42c1b7f6f3d497bcf4841d9659ffb5c4d6c852d00f03407e74d16149 WatchSource:0}: Error finding container d2d8aa7f42c1b7f6f3d497bcf4841d9659ffb5c4d6c852d00f03407e74d16149: Status 404 returned error can't find the container with id d2d8aa7f42c1b7f6f3d497bcf4841d9659ffb5c4d6c852d00f03407e74d16149
	Oct 13 22:08:05 no-preload-998398 kubelet[766]: W1013 22:08:05.692586     766 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/6fb16f37ec05da7de816bc3e9c2e323940cdc45e442a73f88f3547ae26c3416c/crio-01134f09e887822bf1853d65bb5cc14aa2450c39f37b8c8517b202846cbd7d72 WatchSource:0}: Error finding container 01134f09e887822bf1853d65bb5cc14aa2450c39f37b8c8517b202846cbd7d72: Status 404 returned error can't find the container with id 01134f09e887822bf1853d65bb5cc14aa2450c39f37b8c8517b202846cbd7d72
	Oct 13 22:08:07 no-preload-998398 kubelet[766]: I1013 22:08:07.597554     766 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 13 22:08:14 no-preload-998398 kubelet[766]: I1013 22:08:14.860381     766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-jplsp" podStartSLOduration=5.184860526 podStartE2EDuration="9.860363378s" podCreationTimestamp="2025-10-13 22:08:05 +0000 UTC" firstStartedPulling="2025-10-13 22:08:05.68583059 +0000 UTC m=+9.664163531" lastFinishedPulling="2025-10-13 22:08:10.361333434 +0000 UTC m=+14.339666383" observedRunningTime="2025-10-13 22:08:11.354436225 +0000 UTC m=+15.332769166" watchObservedRunningTime="2025-10-13 22:08:14.860363378 +0000 UTC m=+18.838696319"
	Oct 13 22:08:15 no-preload-998398 kubelet[766]: I1013 22:08:15.354233     766 scope.go:117] "RemoveContainer" containerID="b4ae0ad6c6a9a7400a103e4e15721195215c7820cff8b36d614a10704bdf0044"
	Oct 13 22:08:16 no-preload-998398 kubelet[766]: I1013 22:08:16.358092     766 scope.go:117] "RemoveContainer" containerID="b4ae0ad6c6a9a7400a103e4e15721195215c7820cff8b36d614a10704bdf0044"
	Oct 13 22:08:16 no-preload-998398 kubelet[766]: I1013 22:08:16.358648     766 scope.go:117] "RemoveContainer" containerID="301cfdfcce72e85368320fc5419f7232e4524ec8cb624f3e16f0554ad3aa8a27"
	Oct 13 22:08:16 no-preload-998398 kubelet[766]: E1013 22:08:16.359003     766 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2dm9r_kubernetes-dashboard(306156f6-40be-4f57-9275-217f328d41ea)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2dm9r" podUID="306156f6-40be-4f57-9275-217f328d41ea"
	Oct 13 22:08:17 no-preload-998398 kubelet[766]: I1013 22:08:17.364076     766 scope.go:117] "RemoveContainer" containerID="301cfdfcce72e85368320fc5419f7232e4524ec8cb624f3e16f0554ad3aa8a27"
	Oct 13 22:08:17 no-preload-998398 kubelet[766]: E1013 22:08:17.364230     766 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2dm9r_kubernetes-dashboard(306156f6-40be-4f57-9275-217f328d41ea)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2dm9r" podUID="306156f6-40be-4f57-9275-217f328d41ea"
	Oct 13 22:08:23 no-preload-998398 kubelet[766]: I1013 22:08:23.389804     766 scope.go:117] "RemoveContainer" containerID="301cfdfcce72e85368320fc5419f7232e4524ec8cb624f3e16f0554ad3aa8a27"
	Oct 13 22:08:23 no-preload-998398 kubelet[766]: E1013 22:08:23.390001     766 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2dm9r_kubernetes-dashboard(306156f6-40be-4f57-9275-217f328d41ea)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2dm9r" podUID="306156f6-40be-4f57-9275-217f328d41ea"
	Oct 13 22:08:33 no-preload-998398 kubelet[766]: I1013 22:08:33.398914     766 scope.go:117] "RemoveContainer" containerID="f9a58a4b4f83dd1b8124e9a1d5cda8f33c16e1dbdec7513288183b73c61ce6aa"
	Oct 13 22:08:38 no-preload-998398 kubelet[766]: I1013 22:08:38.170640     766 scope.go:117] "RemoveContainer" containerID="301cfdfcce72e85368320fc5419f7232e4524ec8cb624f3e16f0554ad3aa8a27"
	Oct 13 22:08:38 no-preload-998398 kubelet[766]: I1013 22:08:38.413777     766 scope.go:117] "RemoveContainer" containerID="301cfdfcce72e85368320fc5419f7232e4524ec8cb624f3e16f0554ad3aa8a27"
	Oct 13 22:08:38 no-preload-998398 kubelet[766]: I1013 22:08:38.414096     766 scope.go:117] "RemoveContainer" containerID="04ee584de5d90123435d17ff80282c3b59cf8e1ac47d8fab84499bbf30b171fd"
	Oct 13 22:08:38 no-preload-998398 kubelet[766]: E1013 22:08:38.414267     766 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2dm9r_kubernetes-dashboard(306156f6-40be-4f57-9275-217f328d41ea)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2dm9r" podUID="306156f6-40be-4f57-9275-217f328d41ea"
	Oct 13 22:08:43 no-preload-998398 kubelet[766]: I1013 22:08:43.389171     766 scope.go:117] "RemoveContainer" containerID="04ee584de5d90123435d17ff80282c3b59cf8e1ac47d8fab84499bbf30b171fd"
	Oct 13 22:08:43 no-preload-998398 kubelet[766]: E1013 22:08:43.389802     766 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2dm9r_kubernetes-dashboard(306156f6-40be-4f57-9275-217f328d41ea)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2dm9r" podUID="306156f6-40be-4f57-9275-217f328d41ea"
	Oct 13 22:08:51 no-preload-998398 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 13 22:08:51 no-preload-998398 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 13 22:08:51 no-preload-998398 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [5428038f5a6a252f8fdc19fa45aa1ccf971d6760c8dcb30a76f4aa89407b895d] <==
	2025/10/13 22:08:10 Using namespace: kubernetes-dashboard
	2025/10/13 22:08:10 Using in-cluster config to connect to apiserver
	2025/10/13 22:08:10 Using secret token for csrf signing
	2025/10/13 22:08:10 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/13 22:08:10 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/13 22:08:10 Successful initial request to the apiserver, version: v1.34.1
	2025/10/13 22:08:10 Generating JWE encryption key
	2025/10/13 22:08:10 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/13 22:08:10 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/13 22:08:10 Initializing JWE encryption key from synchronized object
	2025/10/13 22:08:10 Creating in-cluster Sidecar client
	2025/10/13 22:08:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/13 22:08:10 Serving insecurely on HTTP port: 9090
	2025/10/13 22:08:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/13 22:08:10 Starting overwatch
	
	
	==> storage-provisioner [9f698d98db98d458fdf34808d5107a22ca7839d0ce01eebe0aaa5a78d3fb01b8] <==
	I1013 22:08:33.456838       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1013 22:08:33.473887       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1013 22:08:33.473952       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1013 22:08:33.477788       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:08:36.932980       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:08:41.192790       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:08:44.790878       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:08:47.844691       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:08:50.866620       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:08:50.873578       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1013 22:08:50.873732       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1013 22:08:50.873903       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-998398_f6fa0502-fdba-4b97-85d9-43ba2153b71a!
	I1013 22:08:50.874807       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8efc1af7-8267-476e-8e56-255e4023ebf3", APIVersion:"v1", ResourceVersion:"676", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-998398_f6fa0502-fdba-4b97-85d9-43ba2153b71a became leader
	W1013 22:08:50.879325       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:08:50.887324       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1013 22:08:50.975519       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-998398_f6fa0502-fdba-4b97-85d9-43ba2153b71a!
	W1013 22:08:52.890547       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:08:52.901812       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:08:54.911071       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:08:54.919353       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [f9a58a4b4f83dd1b8124e9a1d5cda8f33c16e1dbdec7513288183b73c61ce6aa] <==
	I1013 22:08:02.485070       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1013 22:08:32.488579       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-998398 -n no-preload-998398
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-998398 -n no-preload-998398: exit status 2 (476.477196ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-998398 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
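
The post-mortem step above (helpers_test.go:269) shells out to kubectl with a field selector to list pods that are not in the Running phase. As a minimal sketch of the same query from Go (a hypothetical helper for illustration, not the actual helpers_test.go code; assumes kubectl is on PATH and the context name from this run):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// nonRunningPods mirrors the field-selector query used in the post-mortem
	// step above: print pod names, across all namespaces, whose phase is not
	// Running. Hypothetical helper; not the helpers_test.go implementation.
	func nonRunningPods(kubeContext string) (string, error) {
		out, err := exec.Command("kubectl",
			"--context", kubeContext,
			"get", "po", "-A",
			"-o=jsonpath={.items[*].metadata.name}",
			"--field-selector=status.phase!=Running",
		).CombinedOutput()
		return string(out), err
	}

	func main() {
		pods, err := nonRunningPods("no-preload-998398")
		fmt.Printf("non-running pods: %q (err: %v)\n", pods, err)
	}

An empty result simply means no pod reported a phase other than Running at the moment the snapshot was taken.
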
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-998398
helpers_test.go:243: (dbg) docker inspect no-preload-998398:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6fb16f37ec05da7de816bc3e9c2e323940cdc45e442a73f88f3547ae26c3416c",
	        "Created": "2025-10-13T22:06:10.888076989Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 193786,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-13T22:07:49.547990769Z",
	            "FinishedAt": "2025-10-13T22:07:48.734489858Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/6fb16f37ec05da7de816bc3e9c2e323940cdc45e442a73f88f3547ae26c3416c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6fb16f37ec05da7de816bc3e9c2e323940cdc45e442a73f88f3547ae26c3416c/hostname",
	        "HostsPath": "/var/lib/docker/containers/6fb16f37ec05da7de816bc3e9c2e323940cdc45e442a73f88f3547ae26c3416c/hosts",
	        "LogPath": "/var/lib/docker/containers/6fb16f37ec05da7de816bc3e9c2e323940cdc45e442a73f88f3547ae26c3416c/6fb16f37ec05da7de816bc3e9c2e323940cdc45e442a73f88f3547ae26c3416c-json.log",
	        "Name": "/no-preload-998398",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-998398:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-998398",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6fb16f37ec05da7de816bc3e9c2e323940cdc45e442a73f88f3547ae26c3416c",
	                "LowerDir": "/var/lib/docker/overlay2/499694e6085395b70735b3d3547db65ef3a8c5e98935f88339db5f4531738658-init/diff:/var/lib/docker/overlay2/160e637be34680c755ffc6109c686a7fe5c4dfcba06c9274b3806724dc064518/diff",
	                "MergedDir": "/var/lib/docker/overlay2/499694e6085395b70735b3d3547db65ef3a8c5e98935f88339db5f4531738658/merged",
	                "UpperDir": "/var/lib/docker/overlay2/499694e6085395b70735b3d3547db65ef3a8c5e98935f88339db5f4531738658/diff",
	                "WorkDir": "/var/lib/docker/overlay2/499694e6085395b70735b3d3547db65ef3a8c5e98935f88339db5f4531738658/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-998398",
	                "Source": "/var/lib/docker/volumes/no-preload-998398/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-998398",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-998398",
	                "name.minikube.sigs.k8s.io": "no-preload-998398",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "63fa82deb7d95770d68e9b31074f389cfb066b2e32a6f1c3a91aad23df2a85d5",
	            "SandboxKey": "/var/run/docker/netns/63fa82deb7d9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33071"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33072"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33075"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33073"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33074"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-998398": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3e:c2:2f:6d:b2:ab",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "833f6629e3a8d48e88017e58115925d444d24da96413e70671b51381906ca938",
	                    "EndpointID": "b98f6e61bfa1d371beaa908b78be753225734cdec4576d0f023305d072af1d82",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-998398",
	                        "6fb16f37ec05"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
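
The inspect dump above is collected purely for diagnostics; the fields the report actually relies on are Name and the State block (Status/Running/Paused). A minimal sketch of extracting just those fields with Go's standard library (hypothetical, not part of the test suite; assumes the docker CLI is on PATH and the container name from this run):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// kicContainer models only the fields of `docker inspect` output that the
	// post-mortem above looks at; the rest of the JSON is ignored.
	type kicContainer struct {
		Name  string `json:"Name"`
		State struct {
			Status  string `json:"Status"`
			Running bool   `json:"Running"`
			Paused  bool   `json:"Paused"`
		} `json:"State"`
	}

	func main() {
		out, err := exec.Command("docker", "inspect", "no-preload-998398").Output()
		if err != nil {
			panic(err)
		}
		var containers []kicContainer
		if err := json.Unmarshal(out, &containers); err != nil {
			panic(err)
		}
		for _, c := range containers {
			// For this dump the kic container reports status "running",
			// Running=true, Paused=false.
			fmt.Printf("%s: status=%s running=%v paused=%v\n",
				c.Name, c.State.Status, c.State.Running, c.State.Paused)
		}
	}

For this run the container reports Status "running" with Paused false, which lines up with the exit status 2 returned by the status checks around this dump.
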
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-998398 -n no-preload-998398
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-998398 -n no-preload-998398: exit status 2 (534.524509ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-998398 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-998398 logs -n 25: (2.058451068s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬────────────────────
─┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼────────────────────
─┤
	│ ssh     │ -p cert-options-194931 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-194931    │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │ 13 Oct 25 22:04 UTC │
	│ delete  │ -p cert-options-194931                                                                                                                                                                                                                        │ cert-options-194931    │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │ 13 Oct 25 22:04 UTC │
	│ start   │ -p old-k8s-version-061725 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-061725 │ jenkins │ v1.37.0 │ 13 Oct 25 22:04 UTC │ 13 Oct 25 22:05 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-061725 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-061725 │ jenkins │ v1.37.0 │ 13 Oct 25 22:05 UTC │                     │
	│ stop    │ -p old-k8s-version-061725 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-061725 │ jenkins │ v1.37.0 │ 13 Oct 25 22:05 UTC │ 13 Oct 25 22:05 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-061725 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-061725 │ jenkins │ v1.37.0 │ 13 Oct 25 22:05 UTC │ 13 Oct 25 22:05 UTC │
	│ start   │ -p old-k8s-version-061725 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-061725 │ jenkins │ v1.37.0 │ 13 Oct 25 22:05 UTC │ 13 Oct 25 22:06 UTC │
	│ start   │ -p cert-expiration-546667 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-546667 │ jenkins │ v1.37.0 │ 13 Oct 25 22:05 UTC │ 13 Oct 25 22:06 UTC │
	│ delete  │ -p cert-expiration-546667                                                                                                                                                                                                                     │ cert-expiration-546667 │ jenkins │ v1.37.0 │ 13 Oct 25 22:06 UTC │ 13 Oct 25 22:06 UTC │
	│ start   │ -p no-preload-998398 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-998398      │ jenkins │ v1.37.0 │ 13 Oct 25 22:06 UTC │ 13 Oct 25 22:07 UTC │
	│ image   │ old-k8s-version-061725 image list --format=json                                                                                                                                                                                               │ old-k8s-version-061725 │ jenkins │ v1.37.0 │ 13 Oct 25 22:06 UTC │ 13 Oct 25 22:06 UTC │
	│ pause   │ -p old-k8s-version-061725 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-061725 │ jenkins │ v1.37.0 │ 13 Oct 25 22:06 UTC │                     │
	│ delete  │ -p old-k8s-version-061725                                                                                                                                                                                                                     │ old-k8s-version-061725 │ jenkins │ v1.37.0 │ 13 Oct 25 22:06 UTC │ 13 Oct 25 22:07 UTC │
	│ delete  │ -p old-k8s-version-061725                                                                                                                                                                                                                     │ old-k8s-version-061725 │ jenkins │ v1.37.0 │ 13 Oct 25 22:07 UTC │ 13 Oct 25 22:07 UTC │
	│ start   │ -p embed-certs-251758 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-251758     │ jenkins │ v1.37.0 │ 13 Oct 25 22:07 UTC │ 13 Oct 25 22:08 UTC │
	│ addons  │ enable metrics-server -p no-preload-998398 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-998398      │ jenkins │ v1.37.0 │ 13 Oct 25 22:07 UTC │                     │
	│ stop    │ -p no-preload-998398 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-998398      │ jenkins │ v1.37.0 │ 13 Oct 25 22:07 UTC │ 13 Oct 25 22:07 UTC │
	│ addons  │ enable dashboard -p no-preload-998398 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-998398      │ jenkins │ v1.37.0 │ 13 Oct 25 22:07 UTC │ 13 Oct 25 22:07 UTC │
	│ start   │ -p no-preload-998398 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-998398      │ jenkins │ v1.37.0 │ 13 Oct 25 22:07 UTC │ 13 Oct 25 22:08 UTC │
	│ addons  │ enable metrics-server -p embed-certs-251758 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-251758     │ jenkins │ v1.37.0 │ 13 Oct 25 22:08 UTC │                     │
	│ stop    │ -p embed-certs-251758 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-251758     │ jenkins │ v1.37.0 │ 13 Oct 25 22:08 UTC │ 13 Oct 25 22:08 UTC │
	│ addons  │ enable dashboard -p embed-certs-251758 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-251758     │ jenkins │ v1.37.0 │ 13 Oct 25 22:08 UTC │ 13 Oct 25 22:08 UTC │
	│ start   │ -p embed-certs-251758 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-251758     │ jenkins │ v1.37.0 │ 13 Oct 25 22:08 UTC │                     │
	│ image   │ no-preload-998398 image list --format=json                                                                                                                                                                                                    │ no-preload-998398      │ jenkins │ v1.37.0 │ 13 Oct 25 22:08 UTC │ 13 Oct 25 22:08 UTC │
	│ pause   │ -p no-preload-998398 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-998398      │ jenkins │ v1.37.0 │ 13 Oct 25 22:08 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴────────────────────
─┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 22:08:48
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 22:08:48.023530  196707 out.go:360] Setting OutFile to fd 1 ...
	I1013 22:08:48.023692  196707 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:08:48.023703  196707 out.go:374] Setting ErrFile to fd 2...
	I1013 22:08:48.023707  196707 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:08:48.024077  196707 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-2495/.minikube/bin
	I1013 22:08:48.024563  196707 out.go:368] Setting JSON to false
	I1013 22:08:48.025620  196707 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":6662,"bootTime":1760386666,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1013 22:08:48.025699  196707 start.go:141] virtualization:  
	I1013 22:08:48.028839  196707 out.go:179] * [embed-certs-251758] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1013 22:08:48.032780  196707 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 22:08:48.032902  196707 notify.go:220] Checking for updates...
	I1013 22:08:48.038915  196707 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 22:08:48.041918  196707 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-2495/kubeconfig
	I1013 22:08:48.044978  196707 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-2495/.minikube
	I1013 22:08:48.048207  196707 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1013 22:08:48.051334  196707 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 22:08:48.054858  196707 config.go:182] Loaded profile config "embed-certs-251758": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:08:48.055479  196707 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 22:08:48.081472  196707 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1013 22:08:48.081585  196707 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 22:08:48.138411  196707 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-13 22:08:48.12869631 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 22:08:48.138517  196707 docker.go:318] overlay module found
	I1013 22:08:48.141759  196707 out.go:179] * Using the docker driver based on existing profile
	I1013 22:08:48.144535  196707 start.go:305] selected driver: docker
	I1013 22:08:48.144553  196707 start.go:925] validating driver "docker" against &{Name:embed-certs-251758 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-251758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:08:48.144642  196707 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 22:08:48.145359  196707 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 22:08:48.220421  196707 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-13 22:08:48.211448493 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 22:08:48.220755  196707 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 22:08:48.220788  196707 cni.go:84] Creating CNI manager for ""
	I1013 22:08:48.220853  196707 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 22:08:48.220895  196707 start.go:349] cluster config:
	{Name:embed-certs-251758 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-251758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:08:48.224167  196707 out.go:179] * Starting "embed-certs-251758" primary control-plane node in "embed-certs-251758" cluster
	I1013 22:08:48.227189  196707 cache.go:123] Beginning downloading kic base image for docker with crio
	I1013 22:08:48.230172  196707 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1013 22:08:48.233068  196707 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:08:48.233130  196707 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-2495/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1013 22:08:48.233143  196707 cache.go:58] Caching tarball of preloaded images
	I1013 22:08:48.233166  196707 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1013 22:08:48.233227  196707 preload.go:233] Found /home/jenkins/minikube-integration/21724-2495/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1013 22:08:48.233238  196707 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1013 22:08:48.233353  196707 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/embed-certs-251758/config.json ...
	I1013 22:08:48.252886  196707 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1013 22:08:48.252905  196707 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1013 22:08:48.252925  196707 cache.go:232] Successfully downloaded all kic artifacts
	I1013 22:08:48.252950  196707 start.go:360] acquireMachinesLock for embed-certs-251758: {Name:mk516ca80db4149cf875ca7692ac1e5faffe2cbf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 22:08:48.253008  196707 start.go:364] duration metric: took 35.462µs to acquireMachinesLock for "embed-certs-251758"
	I1013 22:08:48.253027  196707 start.go:96] Skipping create...Using existing machine configuration
	I1013 22:08:48.253038  196707 fix.go:54] fixHost starting: 
	I1013 22:08:48.253375  196707 cli_runner.go:164] Run: docker container inspect embed-certs-251758 --format={{.State.Status}}
	I1013 22:08:48.270370  196707 fix.go:112] recreateIfNeeded on embed-certs-251758: state=Stopped err=<nil>
	W1013 22:08:48.270405  196707 fix.go:138] unexpected machine state, will restart: <nil>
	I1013 22:08:48.273620  196707 out.go:252] * Restarting existing docker container for "embed-certs-251758" ...
	I1013 22:08:48.273722  196707 cli_runner.go:164] Run: docker start embed-certs-251758
	I1013 22:08:48.544410  196707 cli_runner.go:164] Run: docker container inspect embed-certs-251758 --format={{.State.Status}}
	I1013 22:08:48.570193  196707 kic.go:430] container "embed-certs-251758" state is running.
	I1013 22:08:48.570580  196707 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-251758
	I1013 22:08:48.599409  196707 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/embed-certs-251758/config.json ...
	I1013 22:08:48.599751  196707 machine.go:93] provisionDockerMachine start ...
	I1013 22:08:48.599897  196707 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-251758
	I1013 22:08:48.618708  196707 main.go:141] libmachine: Using SSH client type: native
	I1013 22:08:48.619030  196707 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33076 <nil> <nil>}
	I1013 22:08:48.619046  196707 main.go:141] libmachine: About to run SSH command:
	hostname
	I1013 22:08:48.619910  196707 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1013 22:08:51.767838  196707 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-251758
	
	I1013 22:08:51.767902  196707 ubuntu.go:182] provisioning hostname "embed-certs-251758"
	I1013 22:08:51.767979  196707 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-251758
	I1013 22:08:51.790943  196707 main.go:141] libmachine: Using SSH client type: native
	I1013 22:08:51.791252  196707 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33076 <nil> <nil>}
	I1013 22:08:51.791264  196707 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-251758 && echo "embed-certs-251758" | sudo tee /etc/hostname
	I1013 22:08:51.958949  196707 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-251758
	
	I1013 22:08:51.959035  196707 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-251758
	I1013 22:08:51.982423  196707 main.go:141] libmachine: Using SSH client type: native
	I1013 22:08:51.982725  196707 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33076 <nil> <nil>}
	I1013 22:08:51.982742  196707 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-251758' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-251758/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-251758' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 22:08:52.147955  196707 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1013 22:08:52.147984  196707 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-2495/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-2495/.minikube}
	I1013 22:08:52.148007  196707 ubuntu.go:190] setting up certificates
	I1013 22:08:52.148018  196707 provision.go:84] configureAuth start
	I1013 22:08:52.148075  196707 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-251758
	I1013 22:08:52.165220  196707 provision.go:143] copyHostCerts
	I1013 22:08:52.165298  196707 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-2495/.minikube/ca.pem, removing ...
	I1013 22:08:52.165321  196707 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-2495/.minikube/ca.pem
	I1013 22:08:52.165399  196707 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-2495/.minikube/ca.pem (1082 bytes)
	I1013 22:08:52.165508  196707 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-2495/.minikube/cert.pem, removing ...
	I1013 22:08:52.165519  196707 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-2495/.minikube/cert.pem
	I1013 22:08:52.165553  196707 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-2495/.minikube/cert.pem (1123 bytes)
	I1013 22:08:52.165625  196707 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-2495/.minikube/key.pem, removing ...
	I1013 22:08:52.165634  196707 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-2495/.minikube/key.pem
	I1013 22:08:52.165659  196707 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-2495/.minikube/key.pem (1675 bytes)
	I1013 22:08:52.165721  196707 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-2495/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca-key.pem org=jenkins.embed-certs-251758 san=[127.0.0.1 192.168.85.2 embed-certs-251758 localhost minikube]
	I1013 22:08:52.322392  196707 provision.go:177] copyRemoteCerts
	I1013 22:08:52.322454  196707 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 22:08:52.322497  196707 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-251758
	I1013 22:08:52.339819  196707 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33076 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/embed-certs-251758/id_rsa Username:docker}
	I1013 22:08:52.449062  196707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1013 22:08:52.474013  196707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1013 22:08:52.501052  196707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1013 22:08:52.534173  196707 provision.go:87] duration metric: took 386.130688ms to configureAuth
	I1013 22:08:52.534197  196707 ubuntu.go:206] setting minikube options for container-runtime
	I1013 22:08:52.534381  196707 config.go:182] Loaded profile config "embed-certs-251758": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:08:52.534487  196707 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-251758
	I1013 22:08:52.558078  196707 main.go:141] libmachine: Using SSH client type: native
	I1013 22:08:52.558379  196707 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33076 <nil> <nil>}
	I1013 22:08:52.558395  196707 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1013 22:08:52.895005  196707 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1013 22:08:52.895029  196707 machine.go:96] duration metric: took 4.295264084s to provisionDockerMachine
	I1013 22:08:52.895040  196707 start.go:293] postStartSetup for "embed-certs-251758" (driver="docker")
	I1013 22:08:52.895051  196707 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 22:08:52.895126  196707 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 22:08:52.895170  196707 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-251758
	I1013 22:08:52.921671  196707 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33076 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/embed-certs-251758/id_rsa Username:docker}
	I1013 22:08:53.027997  196707 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 22:08:53.031310  196707 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1013 22:08:53.031338  196707 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1013 22:08:53.031377  196707 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-2495/.minikube/addons for local assets ...
	I1013 22:08:53.031449  196707 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-2495/.minikube/files for local assets ...
	I1013 22:08:53.031530  196707 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem -> 42992.pem in /etc/ssl/certs
	I1013 22:08:53.031638  196707 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1013 22:08:53.039014  196707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem --> /etc/ssl/certs/42992.pem (1708 bytes)
	I1013 22:08:53.058067  196707 start.go:296] duration metric: took 163.012023ms for postStartSetup
	I1013 22:08:53.058161  196707 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 22:08:53.058218  196707 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-251758
	I1013 22:08:53.078521  196707 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33076 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/embed-certs-251758/id_rsa Username:docker}
	I1013 22:08:53.182454  196707 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1013 22:08:53.188090  196707 fix.go:56] duration metric: took 4.935049768s for fixHost
	I1013 22:08:53.188112  196707 start.go:83] releasing machines lock for "embed-certs-251758", held for 4.935095321s
	I1013 22:08:53.188177  196707 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-251758
	I1013 22:08:53.206802  196707 ssh_runner.go:195] Run: cat /version.json
	I1013 22:08:53.206855  196707 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-251758
	I1013 22:08:53.207077  196707 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 22:08:53.207136  196707 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-251758
	I1013 22:08:53.224712  196707 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33076 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/embed-certs-251758/id_rsa Username:docker}
	I1013 22:08:53.248367  196707 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33076 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/embed-certs-251758/id_rsa Username:docker}
	I1013 22:08:53.339635  196707 ssh_runner.go:195] Run: systemctl --version
	I1013 22:08:53.453757  196707 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1013 22:08:53.501697  196707 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 22:08:53.506993  196707 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 22:08:53.507056  196707 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 22:08:53.518399  196707 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1013 22:08:53.518420  196707 start.go:495] detecting cgroup driver to use...
	I1013 22:08:53.518450  196707 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1013 22:08:53.518508  196707 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1013 22:08:53.538762  196707 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1013 22:08:53.553177  196707 docker.go:218] disabling cri-docker service (if available) ...
	I1013 22:08:53.553234  196707 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 22:08:53.571649  196707 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 22:08:53.586690  196707 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 22:08:53.734939  196707 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 22:08:53.896541  196707 docker.go:234] disabling docker service ...
	I1013 22:08:53.896607  196707 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 22:08:53.914719  196707 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 22:08:53.929103  196707 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 22:08:54.080216  196707 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 22:08:54.230999  196707 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
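	At this point cri-docker and docker have been stopped, disabled, and masked so that CRI-O is the only runtime the kubelet can reach. A minimal sketch for double-checking that state by hand on the node (assuming systemd is PID 1, as in the kicbase image):

	    systemctl is-enabled docker.service cri-docker.service   # both should print "masked"
	    systemctl is-active crio.service                          # "active" once crio is restarted further down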
	I1013 22:08:54.245868  196707 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 22:08:54.262821  196707 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1013 22:08:54.262884  196707 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:08:54.272345  196707 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1013 22:08:54.272407  196707 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:08:54.284249  196707 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:08:54.294785  196707 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:08:54.304415  196707 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 22:08:54.314388  196707 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:08:54.323607  196707 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:08:54.339601  196707 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
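	Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf containing roughly the following fragment (a sketch reconstructed from the commands, not captured from the node):

	    pause_image = "registry.k8s.io/pause:3.10.1"
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]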
	I1013 22:08:54.350561  196707 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 22:08:54.358685  196707 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1013 22:08:54.368946  196707 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:08:54.519434  196707 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1013 22:08:54.668079  196707 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1013 22:08:54.668143  196707 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1013 22:08:54.676390  196707 start.go:563] Will wait 60s for crictl version
	I1013 22:08:54.676465  196707 ssh_runner.go:195] Run: which crictl
	I1013 22:08:54.679987  196707 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1013 22:08:54.709217  196707 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1013 22:08:54.709306  196707 ssh_runner.go:195] Run: crio --version
	I1013 22:08:54.749678  196707 ssh_runner.go:195] Run: crio --version
	I1013 22:08:54.790040  196707 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1013 22:08:54.792918  196707 cli_runner.go:164] Run: docker network inspect embed-certs-251758 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 22:08:54.813384  196707 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1013 22:08:54.818361  196707 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 22:08:54.832942  196707 kubeadm.go:883] updating cluster {Name:embed-certs-251758 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-251758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 22:08:54.833068  196707 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:08:54.833119  196707 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 22:08:54.883194  196707 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 22:08:54.883214  196707 crio.go:433] Images already preloaded, skipping extraction
	I1013 22:08:54.883268  196707 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 22:08:54.934376  196707 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 22:08:54.934400  196707 cache_images.go:85] Images are preloaded, skipping loading
	I1013 22:08:54.934409  196707 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1013 22:08:54.934520  196707 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-251758 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-251758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
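	In the kubelet drop-in above, the empty ExecStart= line is the usual systemd idiom for clearing the ExecStart inherited from the packaged kubelet.service before substituting minikube's own command line. A quick way to inspect the merged unit on the node (a sketch, assuming systemd):

	    systemctl cat kubelet   # shows kubelet.service plus the 10-kubeadm.conf drop-in written a few lines below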
	I1013 22:08:54.934609  196707 ssh_runner.go:195] Run: crio config
	I1013 22:08:55.014709  196707 cni.go:84] Creating CNI manager for ""
	I1013 22:08:55.014735  196707 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 22:08:55.014907  196707 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1013 22:08:55.014939  196707 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-251758 NodeName:embed-certs-251758 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 22:08:55.015085  196707 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-251758"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1013 22:08:55.015168  196707 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1013 22:08:55.024471  196707 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 22:08:55.024548  196707 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 22:08:55.034802  196707 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1013 22:08:55.053321  196707 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 22:08:55.069661  196707 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
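	With the rendered config staged as /var/tmp/minikube/kubeadm.yaml.new, it can be sanity-checked before kubeadm is ever invoked. A minimal sketch (assuming the kubeadm binary in /var/lib/minikube/binaries/v1.34.1 supports the config validate subcommand, as recent releases do):

	    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new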
	I1013 22:08:55.087177  196707 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1013 22:08:55.091664  196707 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 22:08:55.103064  196707 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:08:55.252879  196707 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 22:08:55.273320  196707 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/embed-certs-251758 for IP: 192.168.85.2
	I1013 22:08:55.273338  196707 certs.go:195] generating shared ca certs ...
	I1013 22:08:55.273355  196707 certs.go:227] acquiring lock for ca certs: {Name:mk2386d3847709c1fe7ff4ab092e2e3fd8551167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:08:55.273485  196707 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-2495/.minikube/ca.key
	I1013 22:08:55.273535  196707 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.key
	I1013 22:08:55.273551  196707 certs.go:257] generating profile certs ...
	I1013 22:08:55.273647  196707 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/embed-certs-251758/client.key
	I1013 22:08:55.273719  196707 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/embed-certs-251758/apiserver.key.3c24f2a0
	I1013 22:08:55.273769  196707 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/embed-certs-251758/proxy-client.key
	I1013 22:08:55.273872  196707 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/4299.pem (1338 bytes)
	W1013 22:08:55.273904  196707 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-2495/.minikube/certs/4299_empty.pem, impossibly tiny 0 bytes
	I1013 22:08:55.273916  196707 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca-key.pem (1679 bytes)
	I1013 22:08:55.273942  196707 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem (1082 bytes)
	I1013 22:08:55.273967  196707 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem (1123 bytes)
	I1013 22:08:55.273991  196707 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/key.pem (1675 bytes)
	I1013 22:08:55.274034  196707 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem (1708 bytes)
	I1013 22:08:55.274623  196707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 22:08:55.304959  196707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1013 22:08:55.326762  196707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 22:08:55.367410  196707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1013 22:08:55.399480  196707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/embed-certs-251758/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1013 22:08:55.435524  196707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/embed-certs-251758/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1013 22:08:55.475157  196707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/embed-certs-251758/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 22:08:55.519671  196707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/embed-certs-251758/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1013 22:08:55.552413  196707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/certs/4299.pem --> /usr/share/ca-certificates/4299.pem (1338 bytes)
	I1013 22:08:55.598019  196707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem --> /usr/share/ca-certificates/42992.pem (1708 bytes)
	I1013 22:08:55.648166  196707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 22:08:55.678576  196707 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 22:08:55.711325  196707 ssh_runner.go:195] Run: openssl version
	I1013 22:08:55.719027  196707 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4299.pem && ln -fs /usr/share/ca-certificates/4299.pem /etc/ssl/certs/4299.pem"
	I1013 22:08:55.727456  196707 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4299.pem
	I1013 22:08:55.731968  196707 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 13 21:06 /usr/share/ca-certificates/4299.pem
	I1013 22:08:55.732035  196707 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4299.pem
	I1013 22:08:55.779513  196707 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4299.pem /etc/ssl/certs/51391683.0"
	I1013 22:08:55.794208  196707 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/42992.pem && ln -fs /usr/share/ca-certificates/42992.pem /etc/ssl/certs/42992.pem"
	I1013 22:08:55.803294  196707 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42992.pem
	I1013 22:08:55.806914  196707 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 13 21:06 /usr/share/ca-certificates/42992.pem
	I1013 22:08:55.806984  196707 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42992.pem
	I1013 22:08:55.857409  196707 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/42992.pem /etc/ssl/certs/3ec20f2e.0"
	I1013 22:08:55.868411  196707 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 22:08:55.879469  196707 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:08:55.883353  196707 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 20:59 /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:08:55.883414  196707 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:08:55.925868  196707 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
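	The /etc/ssl/certs/51391683.0, 3ec20f2e.0 and b5213941.0 targets above are OpenSSL subject-hash names: each CA certificate gets a symlink named after the hash of its subject so that certificate verification can locate it by directory lookup. A minimal sketch of how one such link is derived (using the minikubeCA certificate from the log as the example input):

	    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"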
	I1013 22:08:55.933683  196707 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 22:08:55.938194  196707 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1013 22:08:55.986466  196707 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1013 22:08:56.081175  196707 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1013 22:08:56.224594  196707 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1013 22:08:56.420059  196707 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1013 22:08:56.502861  196707 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
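	Each of the -checkend 86400 runs above asks OpenSSL whether the certificate will still be valid 86400 seconds (24 hours) from now: exit status 0 means it will not expire within that window, non-zero means it is already expired or about to be. A manual equivalent for one of the certificates (a sketch):

	    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	      && echo "valid for at least 24h" || echo "expires within 24h"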
	I1013 22:08:56.586896  196707 kubeadm.go:400] StartCluster: {Name:embed-certs-251758 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-251758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:08:56.586977  196707 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 22:08:56.587043  196707 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 22:08:56.677195  196707 cri.go:89] found id: "aae750af84a55af314225c0685c8ae60d5e9a75591e1edfaf24d63b4ef9dacec"
	I1013 22:08:56.677213  196707 cri.go:89] found id: "584a98c7ea4404c695d25b77ddef1fab1aca6fa39f58483da8a818a558fb996c"
	I1013 22:08:56.677217  196707 cri.go:89] found id: "9c6989f62c1172b0c0f363d4229f6d6e18f8427d7c917aa15eacb2457bfad0a2"
	I1013 22:08:56.677221  196707 cri.go:89] found id: "5f76aa65f805b69f7a41cf737a66368820d106aa35a1bd6fad89654cbc4c61aa"
	I1013 22:08:56.677243  196707 cri.go:89] found id: ""
	I1013 22:08:56.677291  196707 ssh_runner.go:195] Run: sudo runc list -f json
	W1013 22:08:56.704284  196707 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:08:56Z" level=error msg="open /run/runc: no such file or directory"
	I1013 22:08:56.704361  196707 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1013 22:08:56.735997  196707 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1013 22:08:56.736012  196707 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1013 22:08:56.736077  196707 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1013 22:08:56.768030  196707 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1013 22:08:56.768613  196707 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-251758" does not appear in /home/jenkins/minikube-integration/21724-2495/kubeconfig
	I1013 22:08:56.768863  196707 kubeconfig.go:62] /home/jenkins/minikube-integration/21724-2495/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-251758" cluster setting kubeconfig missing "embed-certs-251758" context setting]
	I1013 22:08:56.769342  196707 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/kubeconfig: {Name:mke211bdcf461494c5205a242d94f4d44bbce10e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:08:56.770692  196707 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1013 22:08:56.803601  196707 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1013 22:08:56.803632  196707 kubeadm.go:601] duration metric: took 67.612889ms to restartPrimaryControlPlane
	I1013 22:08:56.803641  196707 kubeadm.go:402] duration metric: took 216.755223ms to StartCluster
	I1013 22:08:56.803655  196707 settings.go:142] acquiring lock: {Name:mk4a4b065845724eb9b4bb1832a39a02e57dd066 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:08:56.803727  196707 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-2495/kubeconfig
	I1013 22:08:56.805108  196707 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/kubeconfig: {Name:mke211bdcf461494c5205a242d94f4d44bbce10e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:08:56.805341  196707 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 22:08:56.805578  196707 config.go:182] Loaded profile config "embed-certs-251758": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:08:56.805623  196707 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1013 22:08:56.805691  196707 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-251758"
	I1013 22:08:56.805707  196707 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-251758"
	W1013 22:08:56.805714  196707 addons.go:247] addon storage-provisioner should already be in state true
	I1013 22:08:56.805737  196707 host.go:66] Checking if "embed-certs-251758" exists ...
	I1013 22:08:56.806161  196707 cli_runner.go:164] Run: docker container inspect embed-certs-251758 --format={{.State.Status}}
	I1013 22:08:56.809265  196707 addons.go:69] Setting dashboard=true in profile "embed-certs-251758"
	I1013 22:08:56.809368  196707 addons.go:238] Setting addon dashboard=true in "embed-certs-251758"
	W1013 22:08:56.809392  196707 addons.go:247] addon dashboard should already be in state true
	I1013 22:08:56.809430  196707 host.go:66] Checking if "embed-certs-251758" exists ...
	I1013 22:08:56.809660  196707 addons.go:69] Setting default-storageclass=true in profile "embed-certs-251758"
	I1013 22:08:56.809692  196707 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-251758"
	I1013 22:08:56.809983  196707 cli_runner.go:164] Run: docker container inspect embed-certs-251758 --format={{.State.Status}}
	I1013 22:08:56.810111  196707 out.go:179] * Verifying Kubernetes components...
	
	
	==> CRI-O <==
	Oct 13 22:08:38 no-preload-998398 crio[650]: time="2025-10-13T22:08:38.171259171Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=9e9f31d2-6b33-47ea-83d5-1835c85176ec name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:08:38 no-preload-998398 crio[650]: time="2025-10-13T22:08:38.172494685Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=e44f458e-d8d1-42ea-ae23-9b534644f7d0 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:08:38 no-preload-998398 crio[650]: time="2025-10-13T22:08:38.173502052Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2dm9r/dashboard-metrics-scraper" id=27735105-e541-4fd2-bc71-f9a3070f5e06 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:08:38 no-preload-998398 crio[650]: time="2025-10-13T22:08:38.173704615Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:08:38 no-preload-998398 crio[650]: time="2025-10-13T22:08:38.180707179Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:08:38 no-preload-998398 crio[650]: time="2025-10-13T22:08:38.181331541Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:08:38 no-preload-998398 crio[650]: time="2025-10-13T22:08:38.196285165Z" level=info msg="Created container 04ee584de5d90123435d17ff80282c3b59cf8e1ac47d8fab84499bbf30b171fd: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2dm9r/dashboard-metrics-scraper" id=27735105-e541-4fd2-bc71-f9a3070f5e06 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:08:38 no-preload-998398 crio[650]: time="2025-10-13T22:08:38.197288857Z" level=info msg="Starting container: 04ee584de5d90123435d17ff80282c3b59cf8e1ac47d8fab84499bbf30b171fd" id=725806ce-3b4d-4eeb-ae22-965623ecfef0 name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 22:08:38 no-preload-998398 crio[650]: time="2025-10-13T22:08:38.200099141Z" level=info msg="Started container" PID=1637 containerID=04ee584de5d90123435d17ff80282c3b59cf8e1ac47d8fab84499bbf30b171fd description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2dm9r/dashboard-metrics-scraper id=725806ce-3b4d-4eeb-ae22-965623ecfef0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=01134f09e887822bf1853d65bb5cc14aa2450c39f37b8c8517b202846cbd7d72
	Oct 13 22:08:38 no-preload-998398 conmon[1635]: conmon 04ee584de5d90123435d <ninfo>: container 1637 exited with status 1
	Oct 13 22:08:38 no-preload-998398 crio[650]: time="2025-10-13T22:08:38.416123188Z" level=info msg="Removing container: 301cfdfcce72e85368320fc5419f7232e4524ec8cb624f3e16f0554ad3aa8a27" id=f84bdc22-97eb-428d-91c8-151a2fc3e15f name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 13 22:08:38 no-preload-998398 crio[650]: time="2025-10-13T22:08:38.423702607Z" level=info msg="Error loading conmon cgroup of container 301cfdfcce72e85368320fc5419f7232e4524ec8cb624f3e16f0554ad3aa8a27: cgroup deleted" id=f84bdc22-97eb-428d-91c8-151a2fc3e15f name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 13 22:08:38 no-preload-998398 crio[650]: time="2025-10-13T22:08:38.427341268Z" level=info msg="Removed container 301cfdfcce72e85368320fc5419f7232e4524ec8cb624f3e16f0554ad3aa8a27: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2dm9r/dashboard-metrics-scraper" id=f84bdc22-97eb-428d-91c8-151a2fc3e15f name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 13 22:08:42 no-preload-998398 crio[650]: time="2025-10-13T22:08:42.508934012Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 13 22:08:42 no-preload-998398 crio[650]: time="2025-10-13T22:08:42.513095991Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 13 22:08:42 no-preload-998398 crio[650]: time="2025-10-13T22:08:42.513130378Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 13 22:08:42 no-preload-998398 crio[650]: time="2025-10-13T22:08:42.51315528Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 13 22:08:42 no-preload-998398 crio[650]: time="2025-10-13T22:08:42.516367605Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 13 22:08:42 no-preload-998398 crio[650]: time="2025-10-13T22:08:42.516399547Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 13 22:08:42 no-preload-998398 crio[650]: time="2025-10-13T22:08:42.51642394Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 13 22:08:42 no-preload-998398 crio[650]: time="2025-10-13T22:08:42.519397206Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 13 22:08:42 no-preload-998398 crio[650]: time="2025-10-13T22:08:42.519427548Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 13 22:08:42 no-preload-998398 crio[650]: time="2025-10-13T22:08:42.519454231Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 13 22:08:42 no-preload-998398 crio[650]: time="2025-10-13T22:08:42.522435177Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 13 22:08:42 no-preload-998398 crio[650]: time="2025-10-13T22:08:42.522466905Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	04ee584de5d90       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           19 seconds ago       Exited              dashboard-metrics-scraper   2                   01134f09e8878       dashboard-metrics-scraper-6ffb444bf9-2dm9r   kubernetes-dashboard
	9f698d98db98d       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           24 seconds ago       Running             storage-provisioner         2                   1b4c9d7dda457       storage-provisioner                          kube-system
	5428038f5a6a2       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   47 seconds ago       Running             kubernetes-dashboard        0                   d2d8aa7f42c1b       kubernetes-dashboard-855c9754f9-jplsp        kubernetes-dashboard
	2a55b6efd9bc1       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           55 seconds ago       Running             coredns                     1                   ab52e218829ee       coredns-66bc5c9577-7vlmn                     kube-system
	2703726f04b00       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           55 seconds ago       Running             busybox                     1                   fee21fd98b1af       busybox                                      default
	f9a58a4b4f83d       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           55 seconds ago       Exited              storage-provisioner         1                   1b4c9d7dda457       storage-provisioner                          kube-system
	4ead3ee86a4c7       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           55 seconds ago       Running             kindnet-cni                 1                   b9b36f6d44f15       kindnet-6nvxb                                kube-system
	0fee995561ea0       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           55 seconds ago       Running             kube-proxy                  1                   b5966437c7201       kube-proxy-7zmxr                             kube-system
	2346fd5f183a8       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   742f9efcb9dbb       etcd-no-preload-998398                       kube-system
	8b465dfa7766b       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   8d6e6e4081ba3       kube-controller-manager-no-preload-998398    kube-system
	fef06fef22a94       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   03cd656355dc8       kube-scheduler-no-preload-998398             kube-system
	6d4f60f057762       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   57ef64aea36d6       kube-apiserver-no-preload-998398             kube-system
	
	
	==> coredns [2a55b6efd9bc13cf8923129e73cbfab2aab29f14014b0c88bcb79a3eba86c968] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] 127.0.0.1:43375 - 56030 "HINFO IN 6280205932722633499.109204646060481857. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.005070658s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-998398
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-998398
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6d66ff63385795e7745a92b3d96cb54f5b977801
	                    minikube.k8s.io/name=no-preload-998398
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_13T22_07_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 Oct 2025 22:06:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-998398
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 Oct 2025 22:08:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 Oct 2025 22:08:42 +0000   Mon, 13 Oct 2025 22:06:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 Oct 2025 22:08:42 +0000   Mon, 13 Oct 2025 22:06:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 Oct 2025 22:08:42 +0000   Mon, 13 Oct 2025 22:06:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 13 Oct 2025 22:08:42 +0000   Mon, 13 Oct 2025 22:07:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-998398
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 e8c95e365f6843b98976e6eaa420070f
	  System UUID:                8be1b8dc-60be-4cac-9ebb-ba90ed9c5cdb
	  Boot ID:                    d306204e-e74c-4697-9887-c9f3c96cc083
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-66bc5c9577-7vlmn                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     112s
	  kube-system                 etcd-no-preload-998398                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         117s
	  kube-system                 kindnet-6nvxb                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      112s
	  kube-system                 kube-apiserver-no-preload-998398              250m (12%)    0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-controller-manager-no-preload-998398     200m (10%)    0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-proxy-7zmxr                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-scheduler-no-preload-998398              100m (5%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-2dm9r    0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-jplsp         0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 110s                 kube-proxy       
	  Normal   Starting                 55s                  kube-proxy       
	  Normal   Starting                 2m7s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m7s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m7s (x8 over 2m7s)  kubelet          Node no-preload-998398 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m7s (x8 over 2m7s)  kubelet          Node no-preload-998398 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m7s (x8 over 2m7s)  kubelet          Node no-preload-998398 status is now: NodeHasSufficientPID
	  Normal   Starting                 118s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 118s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    117s                 kubelet          Node no-preload-998398 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     117s                 kubelet          Node no-preload-998398 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  117s                 kubelet          Node no-preload-998398 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           113s                 node-controller  Node no-preload-998398 event: Registered Node no-preload-998398 in Controller
	  Normal   NodeReady                97s                  kubelet          Node no-preload-998398 status is now: NodeReady
	  Normal   Starting                 62s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 62s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  62s (x8 over 62s)    kubelet          Node no-preload-998398 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    62s (x8 over 62s)    kubelet          Node no-preload-998398 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     62s (x8 over 62s)    kubelet          Node no-preload-998398 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           54s                  node-controller  Node no-preload-998398 event: Registered Node no-preload-998398 in Controller
	
	
	==> dmesg <==
	[Oct13 21:39] overlayfs: idmapped layers are currently not supported
	[Oct13 21:40] overlayfs: idmapped layers are currently not supported
	[Oct13 21:41] overlayfs: idmapped layers are currently not supported
	[Oct13 21:42] overlayfs: idmapped layers are currently not supported
	[  +7.684868] overlayfs: idmapped layers are currently not supported
	[Oct13 21:43] overlayfs: idmapped layers are currently not supported
	[ +17.500139] overlayfs: idmapped layers are currently not supported
	[Oct13 21:44] overlayfs: idmapped layers are currently not supported
	[ +25.978359] overlayfs: idmapped layers are currently not supported
	[Oct13 21:46] overlayfs: idmapped layers are currently not supported
	[Oct13 21:47] overlayfs: idmapped layers are currently not supported
	[Oct13 21:49] overlayfs: idmapped layers are currently not supported
	[Oct13 21:50] overlayfs: idmapped layers are currently not supported
	[Oct13 21:51] overlayfs: idmapped layers are currently not supported
	[Oct13 21:53] overlayfs: idmapped layers are currently not supported
	[Oct13 21:54] overlayfs: idmapped layers are currently not supported
	[Oct13 21:55] overlayfs: idmapped layers are currently not supported
	[Oct13 22:02] overlayfs: idmapped layers are currently not supported
	[Oct13 22:04] overlayfs: idmapped layers are currently not supported
	[ +37.438407] overlayfs: idmapped layers are currently not supported
	[Oct13 22:05] overlayfs: idmapped layers are currently not supported
	[Oct13 22:06] overlayfs: idmapped layers are currently not supported
	[Oct13 22:07] overlayfs: idmapped layers are currently not supported
	[ +29.672836] overlayfs: idmapped layers are currently not supported
	[Oct13 22:08] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [2346fd5f183a812bd5bdb156c3e135f978ddbc4289db62f027930721e4ad02ce] <==
	{"level":"warn","ts":"2025-10-13T22:07:59.621408Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:07:59.644423Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:07:59.657521Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:07:59.682325Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:07:59.709352Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:07:59.729842Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:07:59.744389Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:07:59.761785Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:07:59.777991Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:07:59.817972Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:07:59.826953Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:07:59.840565Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:07:59.864619Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:07:59.875682Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:07:59.895897Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:07:59.917223Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:07:59.926604Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:07:59.944052Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:07:59.961871Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:07:59.985809Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:08:00.004217Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:08:00.116595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:08:00.198661Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:08:00.241823Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:08:00.441029Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48214","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 22:08:58 up  1:51,  0 user,  load average: 3.20, 2.74, 2.20
	Linux no-preload-998398 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4ead3ee86a4c785f3394083046b2503b4c6061e4dffda34cd0695c23f44a70f5] <==
	I1013 22:08:02.305701       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1013 22:08:02.312241       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1013 22:08:02.312403       1 main.go:148] setting mtu 1500 for CNI 
	I1013 22:08:02.312416       1 main.go:178] kindnetd IP family: "ipv4"
	I1013 22:08:02.312431       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-13T22:08:02Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1013 22:08:02.524067       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1013 22:08:02.524106       1 controller.go:381] "Waiting for informer caches to sync"
	I1013 22:08:02.524118       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1013 22:08:02.524606       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1013 22:08:32.509191       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1013 22:08:32.510339       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1013 22:08:32.524878       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1013 22:08:32.524988       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1013 22:08:34.124277       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1013 22:08:34.124389       1 metrics.go:72] Registering metrics
	I1013 22:08:34.124504       1 controller.go:711] "Syncing nftables rules"
	I1013 22:08:42.508608       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1013 22:08:42.508663       1 main.go:301] handling current node
	I1013 22:08:52.515868       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1013 22:08:52.515918       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6d4f60f057762a629c080a53d21fd933695b93f082a2b7fd989f3d4229ac75c3] <==
	I1013 22:08:01.373135       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1013 22:08:01.374125       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1013 22:08:01.374140       1 policy_source.go:240] refreshing policies
	I1013 22:08:01.376471       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1013 22:08:01.376504       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1013 22:08:01.377908       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1013 22:08:01.404428       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1013 22:08:01.428064       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1013 22:08:01.440311       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1013 22:08:01.440963       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1013 22:08:01.441307       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1013 22:08:01.476088       1 cache.go:39] Caches are synced for autoregister controller
	I1013 22:08:01.476972       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1013 22:08:01.539618       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	E1013 22:08:01.573219       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1013 22:08:02.008487       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1013 22:08:02.735625       1 controller.go:667] quota admission added evaluator for: namespaces
	I1013 22:08:02.835342       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1013 22:08:02.872647       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1013 22:08:02.882764       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1013 22:08:02.961853       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.233.115"}
	I1013 22:08:02.989301       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.87.14"}
	I1013 22:08:04.812257       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1013 22:08:05.156227       1 controller.go:667] quota admission added evaluator for: endpoints
	I1013 22:08:05.210381       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [8b465dfa7766b973d11779f1b004b6b9862a3752d706b497e8911ef92d698e5d] <==
	I1013 22:08:04.724123       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1013 22:08:04.726842       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1013 22:08:04.731999       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1013 22:08:04.732416       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1013 22:08:04.747942       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 22:08:04.750026       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1013 22:08:04.750089       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1013 22:08:04.750132       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1013 22:08:04.750190       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1013 22:08:04.766025       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1013 22:08:04.776368       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1013 22:08:04.782622       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1013 22:08:04.795929       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1013 22:08:04.799437       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1013 22:08:04.807916       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1013 22:08:04.808292       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1013 22:08:04.808385       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1013 22:08:04.808733       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1013 22:08:04.808852       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1013 22:08:04.812411       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1013 22:08:04.815009       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1013 22:08:04.815107       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1013 22:08:04.822235       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 22:08:04.822257       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1013 22:08:04.822264       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [0fee995561ea043589c7e89d1f5694903510a9b1118ae89adb0ce92a9b49ac46] <==
	I1013 22:08:02.530137       1 server_linux.go:53] "Using iptables proxy"
	I1013 22:08:02.809574       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 22:08:02.915991       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 22:08:02.916026       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1013 22:08:02.916116       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 22:08:03.016171       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1013 22:08:03.016249       1 server_linux.go:132] "Using iptables Proxier"
	I1013 22:08:03.024980       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 22:08:03.025601       1 server.go:527] "Version info" version="v1.34.1"
	I1013 22:08:03.025994       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 22:08:03.028903       1 config.go:200] "Starting service config controller"
	I1013 22:08:03.029040       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 22:08:03.029211       1 config.go:106] "Starting endpoint slice config controller"
	I1013 22:08:03.029254       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 22:08:03.029309       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 22:08:03.029343       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 22:08:03.037500       1 config.go:309] "Starting node config controller"
	I1013 22:08:03.048683       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 22:08:03.048714       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 22:08:03.129968       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1013 22:08:03.130010       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1013 22:08:03.130070       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [fef06fef22a944406c398dc34d304bcc991484835183ae13edd49c795fa70c38] <==
	I1013 22:07:59.228884       1 serving.go:386] Generated self-signed cert in-memory
	I1013 22:08:01.647267       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1013 22:08:01.647304       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 22:08:01.681845       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1013 22:08:01.681973       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1013 22:08:01.682690       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 22:08:01.687902       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 22:08:01.682773       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1013 22:08:01.689374       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1013 22:08:01.693269       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1013 22:08:01.693388       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1013 22:08:01.783591       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1013 22:08:01.790385       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 22:08:01.791255       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Oct 13 22:08:05 no-preload-998398 kubelet[766]: I1013 22:08:05.420878     766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/306156f6-40be-4f57-9275-217f328d41ea-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-2dm9r\" (UID: \"306156f6-40be-4f57-9275-217f328d41ea\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2dm9r"
	Oct 13 22:08:05 no-preload-998398 kubelet[766]: I1013 22:08:05.420905     766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gq668\" (UniqueName: \"kubernetes.io/projected/4814b560-6eca-4988-86bf-4b885ba6f1f9-kube-api-access-gq668\") pod \"kubernetes-dashboard-855c9754f9-jplsp\" (UID: \"4814b560-6eca-4988-86bf-4b885ba6f1f9\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-jplsp"
	Oct 13 22:08:05 no-preload-998398 kubelet[766]: I1013 22:08:05.420925     766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/4814b560-6eca-4988-86bf-4b885ba6f1f9-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-jplsp\" (UID: \"4814b560-6eca-4988-86bf-4b885ba6f1f9\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-jplsp"
	Oct 13 22:08:05 no-preload-998398 kubelet[766]: W1013 22:08:05.680926     766 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/6fb16f37ec05da7de816bc3e9c2e323940cdc45e442a73f88f3547ae26c3416c/crio-d2d8aa7f42c1b7f6f3d497bcf4841d9659ffb5c4d6c852d00f03407e74d16149 WatchSource:0}: Error finding container d2d8aa7f42c1b7f6f3d497bcf4841d9659ffb5c4d6c852d00f03407e74d16149: Status 404 returned error can't find the container with id d2d8aa7f42c1b7f6f3d497bcf4841d9659ffb5c4d6c852d00f03407e74d16149
	Oct 13 22:08:05 no-preload-998398 kubelet[766]: W1013 22:08:05.692586     766 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/6fb16f37ec05da7de816bc3e9c2e323940cdc45e442a73f88f3547ae26c3416c/crio-01134f09e887822bf1853d65bb5cc14aa2450c39f37b8c8517b202846cbd7d72 WatchSource:0}: Error finding container 01134f09e887822bf1853d65bb5cc14aa2450c39f37b8c8517b202846cbd7d72: Status 404 returned error can't find the container with id 01134f09e887822bf1853d65bb5cc14aa2450c39f37b8c8517b202846cbd7d72
	Oct 13 22:08:07 no-preload-998398 kubelet[766]: I1013 22:08:07.597554     766 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 13 22:08:14 no-preload-998398 kubelet[766]: I1013 22:08:14.860381     766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-jplsp" podStartSLOduration=5.184860526 podStartE2EDuration="9.860363378s" podCreationTimestamp="2025-10-13 22:08:05 +0000 UTC" firstStartedPulling="2025-10-13 22:08:05.68583059 +0000 UTC m=+9.664163531" lastFinishedPulling="2025-10-13 22:08:10.361333434 +0000 UTC m=+14.339666383" observedRunningTime="2025-10-13 22:08:11.354436225 +0000 UTC m=+15.332769166" watchObservedRunningTime="2025-10-13 22:08:14.860363378 +0000 UTC m=+18.838696319"
	Oct 13 22:08:15 no-preload-998398 kubelet[766]: I1013 22:08:15.354233     766 scope.go:117] "RemoveContainer" containerID="b4ae0ad6c6a9a7400a103e4e15721195215c7820cff8b36d614a10704bdf0044"
	Oct 13 22:08:16 no-preload-998398 kubelet[766]: I1013 22:08:16.358092     766 scope.go:117] "RemoveContainer" containerID="b4ae0ad6c6a9a7400a103e4e15721195215c7820cff8b36d614a10704bdf0044"
	Oct 13 22:08:16 no-preload-998398 kubelet[766]: I1013 22:08:16.358648     766 scope.go:117] "RemoveContainer" containerID="301cfdfcce72e85368320fc5419f7232e4524ec8cb624f3e16f0554ad3aa8a27"
	Oct 13 22:08:16 no-preload-998398 kubelet[766]: E1013 22:08:16.359003     766 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2dm9r_kubernetes-dashboard(306156f6-40be-4f57-9275-217f328d41ea)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2dm9r" podUID="306156f6-40be-4f57-9275-217f328d41ea"
	Oct 13 22:08:17 no-preload-998398 kubelet[766]: I1013 22:08:17.364076     766 scope.go:117] "RemoveContainer" containerID="301cfdfcce72e85368320fc5419f7232e4524ec8cb624f3e16f0554ad3aa8a27"
	Oct 13 22:08:17 no-preload-998398 kubelet[766]: E1013 22:08:17.364230     766 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2dm9r_kubernetes-dashboard(306156f6-40be-4f57-9275-217f328d41ea)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2dm9r" podUID="306156f6-40be-4f57-9275-217f328d41ea"
	Oct 13 22:08:23 no-preload-998398 kubelet[766]: I1013 22:08:23.389804     766 scope.go:117] "RemoveContainer" containerID="301cfdfcce72e85368320fc5419f7232e4524ec8cb624f3e16f0554ad3aa8a27"
	Oct 13 22:08:23 no-preload-998398 kubelet[766]: E1013 22:08:23.390001     766 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2dm9r_kubernetes-dashboard(306156f6-40be-4f57-9275-217f328d41ea)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2dm9r" podUID="306156f6-40be-4f57-9275-217f328d41ea"
	Oct 13 22:08:33 no-preload-998398 kubelet[766]: I1013 22:08:33.398914     766 scope.go:117] "RemoveContainer" containerID="f9a58a4b4f83dd1b8124e9a1d5cda8f33c16e1dbdec7513288183b73c61ce6aa"
	Oct 13 22:08:38 no-preload-998398 kubelet[766]: I1013 22:08:38.170640     766 scope.go:117] "RemoveContainer" containerID="301cfdfcce72e85368320fc5419f7232e4524ec8cb624f3e16f0554ad3aa8a27"
	Oct 13 22:08:38 no-preload-998398 kubelet[766]: I1013 22:08:38.413777     766 scope.go:117] "RemoveContainer" containerID="301cfdfcce72e85368320fc5419f7232e4524ec8cb624f3e16f0554ad3aa8a27"
	Oct 13 22:08:38 no-preload-998398 kubelet[766]: I1013 22:08:38.414096     766 scope.go:117] "RemoveContainer" containerID="04ee584de5d90123435d17ff80282c3b59cf8e1ac47d8fab84499bbf30b171fd"
	Oct 13 22:08:38 no-preload-998398 kubelet[766]: E1013 22:08:38.414267     766 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2dm9r_kubernetes-dashboard(306156f6-40be-4f57-9275-217f328d41ea)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2dm9r" podUID="306156f6-40be-4f57-9275-217f328d41ea"
	Oct 13 22:08:43 no-preload-998398 kubelet[766]: I1013 22:08:43.389171     766 scope.go:117] "RemoveContainer" containerID="04ee584de5d90123435d17ff80282c3b59cf8e1ac47d8fab84499bbf30b171fd"
	Oct 13 22:08:43 no-preload-998398 kubelet[766]: E1013 22:08:43.389802     766 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2dm9r_kubernetes-dashboard(306156f6-40be-4f57-9275-217f328d41ea)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2dm9r" podUID="306156f6-40be-4f57-9275-217f328d41ea"
	Oct 13 22:08:51 no-preload-998398 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 13 22:08:51 no-preload-998398 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 13 22:08:51 no-preload-998398 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [5428038f5a6a252f8fdc19fa45aa1ccf971d6760c8dcb30a76f4aa89407b895d] <==
	2025/10/13 22:08:10 Using namespace: kubernetes-dashboard
	2025/10/13 22:08:10 Using in-cluster config to connect to apiserver
	2025/10/13 22:08:10 Using secret token for csrf signing
	2025/10/13 22:08:10 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/13 22:08:10 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/13 22:08:10 Successful initial request to the apiserver, version: v1.34.1
	2025/10/13 22:08:10 Generating JWE encryption key
	2025/10/13 22:08:10 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/13 22:08:10 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/13 22:08:10 Initializing JWE encryption key from synchronized object
	2025/10/13 22:08:10 Creating in-cluster Sidecar client
	2025/10/13 22:08:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/13 22:08:10 Serving insecurely on HTTP port: 9090
	2025/10/13 22:08:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/13 22:08:10 Starting overwatch
	
	
	==> storage-provisioner [9f698d98db98d458fdf34808d5107a22ca7839d0ce01eebe0aaa5a78d3fb01b8] <==
	I1013 22:08:33.456838       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1013 22:08:33.473887       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1013 22:08:33.473952       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1013 22:08:33.477788       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:08:36.932980       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:08:41.192790       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:08:44.790878       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:08:47.844691       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:08:50.866620       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:08:50.873578       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1013 22:08:50.873732       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1013 22:08:50.873903       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-998398_f6fa0502-fdba-4b97-85d9-43ba2153b71a!
	I1013 22:08:50.874807       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8efc1af7-8267-476e-8e56-255e4023ebf3", APIVersion:"v1", ResourceVersion:"676", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-998398_f6fa0502-fdba-4b97-85d9-43ba2153b71a became leader
	W1013 22:08:50.879325       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:08:50.887324       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1013 22:08:50.975519       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-998398_f6fa0502-fdba-4b97-85d9-43ba2153b71a!
	W1013 22:08:52.890547       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:08:52.901812       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:08:54.911071       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:08:54.919353       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:08:56.928162       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:08:56.940293       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [f9a58a4b4f83dd1b8124e9a1d5cda8f33c16e1dbdec7513288183b73c61ce6aa] <==
	I1013 22:08:02.485070       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1013 22:08:32.488579       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-998398 -n no-preload-998398
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-998398 -n no-preload-998398: exit status 2 (522.625222ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-998398 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (8.70s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (6.06s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-251758 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p embed-certs-251758 --alsologtostderr -v=1: exit status 80 (1.785491292s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-251758 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1013 22:09:51.376036  202752 out.go:360] Setting OutFile to fd 1 ...
	I1013 22:09:51.376493  202752 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:09:51.376507  202752 out.go:374] Setting ErrFile to fd 2...
	I1013 22:09:51.376512  202752 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:09:51.377202  202752 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-2495/.minikube/bin
	I1013 22:09:51.377713  202752 out.go:368] Setting JSON to false
	I1013 22:09:51.377813  202752 mustload.go:65] Loading cluster: embed-certs-251758
	I1013 22:09:51.378502  202752 config.go:182] Loaded profile config "embed-certs-251758": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:09:51.379244  202752 cli_runner.go:164] Run: docker container inspect embed-certs-251758 --format={{.State.Status}}
	I1013 22:09:51.401578  202752 host.go:66] Checking if "embed-certs-251758" exists ...
	I1013 22:09:51.401884  202752 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 22:09:51.461588  202752 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-13 22:09:51.452562773 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 22:09:51.462269  202752 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-251758 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1013 22:09:51.466322  202752 out.go:179] * Pausing node embed-certs-251758 ... 
	I1013 22:09:51.470874  202752 host.go:66] Checking if "embed-certs-251758" exists ...
	I1013 22:09:51.471225  202752 ssh_runner.go:195] Run: systemctl --version
	I1013 22:09:51.471276  202752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-251758
	I1013 22:09:51.488566  202752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33076 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/embed-certs-251758/id_rsa Username:docker}
	I1013 22:09:51.590189  202752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 22:09:51.615865  202752 pause.go:52] kubelet running: true
	I1013 22:09:51.615929  202752 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1013 22:09:51.867108  202752 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1013 22:09:51.867220  202752 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1013 22:09:51.940635  202752 cri.go:89] found id: "6fc9ddef880b60d90c1173448c140aac193f748165971840fb9a1cfdc4aa1d70"
	I1013 22:09:51.940661  202752 cri.go:89] found id: "867161f640012fdfebf1dab5ae2b56691570ae088f99d8eb681cb0a4d8504d85"
	I1013 22:09:51.940667  202752 cri.go:89] found id: "79d9d302bbf1a83c8987224c8f4facc00eeb460f55b3bfa8c4bf25cd20012882"
	I1013 22:09:51.940671  202752 cri.go:89] found id: "6d3d93734554a1ed7c0be2214b61d891319ff15030b44abcfa42a8cad20a6268"
	I1013 22:09:51.940675  202752 cri.go:89] found id: "90dfa1eb353c58a620786cbe9e0b45cd92ede40e22ea29e7b93ccc4a41008baf"
	I1013 22:09:51.940679  202752 cri.go:89] found id: "aae750af84a55af314225c0685c8ae60d5e9a75591e1edfaf24d63b4ef9dacec"
	I1013 22:09:51.940701  202752 cri.go:89] found id: "584a98c7ea4404c695d25b77ddef1fab1aca6fa39f58483da8a818a558fb996c"
	I1013 22:09:51.940713  202752 cri.go:89] found id: "9c6989f62c1172b0c0f363d4229f6d6e18f8427d7c917aa15eacb2457bfad0a2"
	I1013 22:09:51.940718  202752 cri.go:89] found id: "5f76aa65f805b69f7a41cf737a66368820d106aa35a1bd6fad89654cbc4c61aa"
	I1013 22:09:51.940725  202752 cri.go:89] found id: "c3551818d9baeddb52ce2c4239dc602b788f3242a90459a1844f4ab66a1d96ae"
	I1013 22:09:51.940733  202752 cri.go:89] found id: "a7b4e396ad98c890c97a0efa0e70d34ea49729a1e195184cb861210865588c8c"
	I1013 22:09:51.940736  202752 cri.go:89] found id: ""
	I1013 22:09:51.940797  202752 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 22:09:51.951844  202752 retry.go:31] will retry after 332.316378ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:09:51Z" level=error msg="open /run/runc: no such file or directory"
	I1013 22:09:52.285239  202752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 22:09:52.298085  202752 pause.go:52] kubelet running: false
	I1013 22:09:52.298186  202752 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1013 22:09:52.470577  202752 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1013 22:09:52.470661  202752 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1013 22:09:52.554727  202752 cri.go:89] found id: "6fc9ddef880b60d90c1173448c140aac193f748165971840fb9a1cfdc4aa1d70"
	I1013 22:09:52.554751  202752 cri.go:89] found id: "867161f640012fdfebf1dab5ae2b56691570ae088f99d8eb681cb0a4d8504d85"
	I1013 22:09:52.554757  202752 cri.go:89] found id: "79d9d302bbf1a83c8987224c8f4facc00eeb460f55b3bfa8c4bf25cd20012882"
	I1013 22:09:52.554761  202752 cri.go:89] found id: "6d3d93734554a1ed7c0be2214b61d891319ff15030b44abcfa42a8cad20a6268"
	I1013 22:09:52.554764  202752 cri.go:89] found id: "90dfa1eb353c58a620786cbe9e0b45cd92ede40e22ea29e7b93ccc4a41008baf"
	I1013 22:09:52.554768  202752 cri.go:89] found id: "aae750af84a55af314225c0685c8ae60d5e9a75591e1edfaf24d63b4ef9dacec"
	I1013 22:09:52.554771  202752 cri.go:89] found id: "584a98c7ea4404c695d25b77ddef1fab1aca6fa39f58483da8a818a558fb996c"
	I1013 22:09:52.554773  202752 cri.go:89] found id: "9c6989f62c1172b0c0f363d4229f6d6e18f8427d7c917aa15eacb2457bfad0a2"
	I1013 22:09:52.554776  202752 cri.go:89] found id: "5f76aa65f805b69f7a41cf737a66368820d106aa35a1bd6fad89654cbc4c61aa"
	I1013 22:09:52.554803  202752 cri.go:89] found id: "c3551818d9baeddb52ce2c4239dc602b788f3242a90459a1844f4ab66a1d96ae"
	I1013 22:09:52.554814  202752 cri.go:89] found id: "a7b4e396ad98c890c97a0efa0e70d34ea49729a1e195184cb861210865588c8c"
	I1013 22:09:52.554818  202752 cri.go:89] found id: ""
	I1013 22:09:52.554915  202752 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 22:09:52.566538  202752 retry.go:31] will retry after 237.922593ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:09:52Z" level=error msg="open /run/runc: no such file or directory"
	I1013 22:09:52.805050  202752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 22:09:52.817777  202752 pause.go:52] kubelet running: false
	I1013 22:09:52.817921  202752 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1013 22:09:52.987474  202752 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1013 22:09:52.987596  202752 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1013 22:09:53.067259  202752 cri.go:89] found id: "6fc9ddef880b60d90c1173448c140aac193f748165971840fb9a1cfdc4aa1d70"
	I1013 22:09:53.067279  202752 cri.go:89] found id: "867161f640012fdfebf1dab5ae2b56691570ae088f99d8eb681cb0a4d8504d85"
	I1013 22:09:53.067283  202752 cri.go:89] found id: "79d9d302bbf1a83c8987224c8f4facc00eeb460f55b3bfa8c4bf25cd20012882"
	I1013 22:09:53.067287  202752 cri.go:89] found id: "6d3d93734554a1ed7c0be2214b61d891319ff15030b44abcfa42a8cad20a6268"
	I1013 22:09:53.067290  202752 cri.go:89] found id: "90dfa1eb353c58a620786cbe9e0b45cd92ede40e22ea29e7b93ccc4a41008baf"
	I1013 22:09:53.067293  202752 cri.go:89] found id: "aae750af84a55af314225c0685c8ae60d5e9a75591e1edfaf24d63b4ef9dacec"
	I1013 22:09:53.067320  202752 cri.go:89] found id: "584a98c7ea4404c695d25b77ddef1fab1aca6fa39f58483da8a818a558fb996c"
	I1013 22:09:53.067332  202752 cri.go:89] found id: "9c6989f62c1172b0c0f363d4229f6d6e18f8427d7c917aa15eacb2457bfad0a2"
	I1013 22:09:53.067336  202752 cri.go:89] found id: "5f76aa65f805b69f7a41cf737a66368820d106aa35a1bd6fad89654cbc4c61aa"
	I1013 22:09:53.067342  202752 cri.go:89] found id: "c3551818d9baeddb52ce2c4239dc602b788f3242a90459a1844f4ab66a1d96ae"
	I1013 22:09:53.067345  202752 cri.go:89] found id: "a7b4e396ad98c890c97a0efa0e70d34ea49729a1e195184cb861210865588c8c"
	I1013 22:09:53.067348  202752 cri.go:89] found id: ""
	I1013 22:09:53.067451  202752 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 22:09:53.082482  202752 out.go:203] 
	W1013 22:09:53.085693  202752 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:09:53Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:09:53Z" level=error msg="open /run/runc: no such file or directory"
	
	W1013 22:09:53.085718  202752 out.go:285] * 
	* 
	W1013 22:09:53.091968  202752 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1013 22:09:53.097039  202752 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p embed-certs-251758 --alsologtostderr -v=1 failed: exit status 80
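Note: the GUEST_PAUSE error above is raised because `sudo runc list -f json` exits 1 with "open /run/runc: no such file or directory" on every retry; since this profile uses the crio container runtime, the runc state directory that minikube polls may simply not exist on the node. The lines below are a minimal diagnostic sketch, not commands executed by this test run: the profile name is taken from the log, but the candidate state directories and the crictl check are illustrative assumptions.

	# Re-run the exact command minikube used, then look for where the CRI runtime
	# actually keeps container state (the paths checked are assumptions, not from this log).
	minikube ssh -p embed-certs-251758 -- sudo runc list -f json
	minikube ssh -p embed-certs-251758 -- ls -d /run/runc /run/crun /run/containers
	# Confirm containers are still running via crictl, which talks to crio directly:
	minikube ssh -p embed-certs-251758 -- sudo crictl ps --state Running --quiet
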
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-251758
helpers_test.go:243: (dbg) docker inspect embed-certs-251758:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "bce2b62de8b18e7b3cd1dad60cf5b9b624701b45de81bf20dc286e3c67300396",
	        "Created": "2025-10-13T22:07:07.277688258Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 196834,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-13T22:08:48.305179534Z",
	            "FinishedAt": "2025-10-13T22:08:47.488726439Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/bce2b62de8b18e7b3cd1dad60cf5b9b624701b45de81bf20dc286e3c67300396/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bce2b62de8b18e7b3cd1dad60cf5b9b624701b45de81bf20dc286e3c67300396/hostname",
	        "HostsPath": "/var/lib/docker/containers/bce2b62de8b18e7b3cd1dad60cf5b9b624701b45de81bf20dc286e3c67300396/hosts",
	        "LogPath": "/var/lib/docker/containers/bce2b62de8b18e7b3cd1dad60cf5b9b624701b45de81bf20dc286e3c67300396/bce2b62de8b18e7b3cd1dad60cf5b9b624701b45de81bf20dc286e3c67300396-json.log",
	        "Name": "/embed-certs-251758",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-251758:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-251758",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "bce2b62de8b18e7b3cd1dad60cf5b9b624701b45de81bf20dc286e3c67300396",
	                "LowerDir": "/var/lib/docker/overlay2/6627eb940d8f167382df1d3afa375f7fb85691aca9adf9e1dbdcb28e949b9a84-init/diff:/var/lib/docker/overlay2/160e637be34680c755ffc6109c686a7fe5c4dfcba06c9274b3806724dc064518/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6627eb940d8f167382df1d3afa375f7fb85691aca9adf9e1dbdcb28e949b9a84/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6627eb940d8f167382df1d3afa375f7fb85691aca9adf9e1dbdcb28e949b9a84/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6627eb940d8f167382df1d3afa375f7fb85691aca9adf9e1dbdcb28e949b9a84/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-251758",
	                "Source": "/var/lib/docker/volumes/embed-certs-251758/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-251758",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-251758",
	                "name.minikube.sigs.k8s.io": "embed-certs-251758",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8db7f867bf1bf3a517399cdce59e6ef9a51b677c5e79cf783a537b6b9f9db3a8",
	            "SandboxKey": "/var/run/docker/netns/8db7f867bf1b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33076"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33077"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33080"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33078"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33079"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-251758": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ba:96:18:34:21:63",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b9096ba29d296c438f9a557fd2db13e4e114de39426eb54481a5b79f96f151ea",
	                    "EndpointID": "bc1f1b45dd592e3a0fbf5c10346635ab9dbadfa55039de5e214ec4139090d231",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-251758",
	                        "bce2b62de8b1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
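Note on the docker inspect output above: the NetworkSettings.Ports map is the part the helpers actually rely on. Each container port (22, 2376, 5000, 8443, 32443) is published on 127.0.0.1 with an ephemeral host port (33076-33080 for this run), and those mappings are how SSH and the apiserver are reached from the host. Below is a minimal sketch of resolving one such mapping the same way the later log lines do, via docker container inspect with a Go template; the template string is taken from this log, while the wrapper program around it is only an assumed illustration.

// lookup_port.go: illustrative only. Resolve the host port Docker published
// for a container port, using the same inspect template that appears later
// in this log for default-k8s-diff-port-007533.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func hostPort(container, containerPort string) (string, error) {
	// Expands to e.g. {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}
	tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, containerPort)
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	// embed-certs-251758 is the container inspected above; its 22/tcp entry maps to 33076.
	port, err := hostPort("embed-certs-251758", "22/tcp")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("ssh host port:", port)
}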
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-251758 -n embed-certs-251758
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-251758 -n embed-certs-251758: exit status 2 (343.455748ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
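The pattern above is worth calling out: the status probe exits with code 2 even though the host prints Running, and the harness explicitly notes that this may be acceptable. After a pause, the nonzero exit presumably reflects the stopped or paused Kubernetes components rather than a dead Docker machine, so only the Host field is trusted at this point. A rough sketch of that tolerance follows, written as a hypothetical checker and not the actual helpers_test.go code.

// host_state.go: hypothetical illustration of the "(may be ok)" handling.
// Run `minikube status --format={{.Host}}` and accept a nonzero exit code
// as long as the host itself reports Running.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func hostState(profile string) (string, error) {
	cmd := exec.Command("out/minikube-linux-arm64", "status",
		"--format={{.Host}}", "-p", profile, "-n", profile)
	out, runErr := cmd.CombinedOutput()
	state := strings.TrimSpace(string(out))
	if runErr != nil && state != "Running" {
		// Only treat the exit code as fatal when the host is not up at all.
		return state, fmt.Errorf("status failed: %v (output %q)", runErr, state)
	}
	return state, nil // exit status 2 with "Running" output falls through here
}

func main() {
	state, err := hostState("embed-certs-251758")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("host:", state)
}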
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-251758 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-251758 logs -n 25: (1.307906991s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p old-k8s-version-061725 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-061725       │ jenkins │ v1.37.0 │ 13 Oct 25 22:05 UTC │ 13 Oct 25 22:06 UTC │
	│ start   │ -p cert-expiration-546667 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-546667       │ jenkins │ v1.37.0 │ 13 Oct 25 22:05 UTC │ 13 Oct 25 22:06 UTC │
	│ delete  │ -p cert-expiration-546667                                                                                                                                                                                                                     │ cert-expiration-546667       │ jenkins │ v1.37.0 │ 13 Oct 25 22:06 UTC │ 13 Oct 25 22:06 UTC │
	│ start   │ -p no-preload-998398 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-998398            │ jenkins │ v1.37.0 │ 13 Oct 25 22:06 UTC │ 13 Oct 25 22:07 UTC │
	│ image   │ old-k8s-version-061725 image list --format=json                                                                                                                                                                                               │ old-k8s-version-061725       │ jenkins │ v1.37.0 │ 13 Oct 25 22:06 UTC │ 13 Oct 25 22:06 UTC │
	│ pause   │ -p old-k8s-version-061725 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-061725       │ jenkins │ v1.37.0 │ 13 Oct 25 22:06 UTC │                     │
	│ delete  │ -p old-k8s-version-061725                                                                                                                                                                                                                     │ old-k8s-version-061725       │ jenkins │ v1.37.0 │ 13 Oct 25 22:06 UTC │ 13 Oct 25 22:07 UTC │
	│ delete  │ -p old-k8s-version-061725                                                                                                                                                                                                                     │ old-k8s-version-061725       │ jenkins │ v1.37.0 │ 13 Oct 25 22:07 UTC │ 13 Oct 25 22:07 UTC │
	│ start   │ -p embed-certs-251758 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-251758           │ jenkins │ v1.37.0 │ 13 Oct 25 22:07 UTC │ 13 Oct 25 22:08 UTC │
	│ addons  │ enable metrics-server -p no-preload-998398 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-998398            │ jenkins │ v1.37.0 │ 13 Oct 25 22:07 UTC │                     │
	│ stop    │ -p no-preload-998398 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-998398            │ jenkins │ v1.37.0 │ 13 Oct 25 22:07 UTC │ 13 Oct 25 22:07 UTC │
	│ addons  │ enable dashboard -p no-preload-998398 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-998398            │ jenkins │ v1.37.0 │ 13 Oct 25 22:07 UTC │ 13 Oct 25 22:07 UTC │
	│ start   │ -p no-preload-998398 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-998398            │ jenkins │ v1.37.0 │ 13 Oct 25 22:07 UTC │ 13 Oct 25 22:08 UTC │
	│ addons  │ enable metrics-server -p embed-certs-251758 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-251758           │ jenkins │ v1.37.0 │ 13 Oct 25 22:08 UTC │                     │
	│ stop    │ -p embed-certs-251758 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-251758           │ jenkins │ v1.37.0 │ 13 Oct 25 22:08 UTC │ 13 Oct 25 22:08 UTC │
	│ addons  │ enable dashboard -p embed-certs-251758 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-251758           │ jenkins │ v1.37.0 │ 13 Oct 25 22:08 UTC │ 13 Oct 25 22:08 UTC │
	│ start   │ -p embed-certs-251758 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-251758           │ jenkins │ v1.37.0 │ 13 Oct 25 22:08 UTC │ 13 Oct 25 22:09 UTC │
	│ image   │ no-preload-998398 image list --format=json                                                                                                                                                                                                    │ no-preload-998398            │ jenkins │ v1.37.0 │ 13 Oct 25 22:08 UTC │ 13 Oct 25 22:08 UTC │
	│ pause   │ -p no-preload-998398 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-998398            │ jenkins │ v1.37.0 │ 13 Oct 25 22:08 UTC │                     │
	│ delete  │ -p no-preload-998398                                                                                                                                                                                                                          │ no-preload-998398            │ jenkins │ v1.37.0 │ 13 Oct 25 22:08 UTC │ 13 Oct 25 22:09 UTC │
	│ delete  │ -p no-preload-998398                                                                                                                                                                                                                          │ no-preload-998398            │ jenkins │ v1.37.0 │ 13 Oct 25 22:09 UTC │ 13 Oct 25 22:09 UTC │
	│ delete  │ -p disable-driver-mounts-691681                                                                                                                                                                                                               │ disable-driver-mounts-691681 │ jenkins │ v1.37.0 │ 13 Oct 25 22:09 UTC │ 13 Oct 25 22:09 UTC │
	│ start   │ -p default-k8s-diff-port-007533 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-007533 │ jenkins │ v1.37.0 │ 13 Oct 25 22:09 UTC │                     │
	│ image   │ embed-certs-251758 image list --format=json                                                                                                                                                                                                   │ embed-certs-251758           │ jenkins │ v1.37.0 │ 13 Oct 25 22:09 UTC │ 13 Oct 25 22:09 UTC │
	│ pause   │ -p embed-certs-251758 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-251758           │ jenkins │ v1.37.0 │ 13 Oct 25 22:09 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 22:09:03
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 22:09:03.469276  199649 out.go:360] Setting OutFile to fd 1 ...
	I1013 22:09:03.469509  199649 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:09:03.469538  199649 out.go:374] Setting ErrFile to fd 2...
	I1013 22:09:03.469566  199649 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:09:03.469862  199649 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-2495/.minikube/bin
	I1013 22:09:03.470350  199649 out.go:368] Setting JSON to false
	I1013 22:09:03.471387  199649 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":6678,"bootTime":1760386666,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1013 22:09:03.471482  199649 start.go:141] virtualization:  
	I1013 22:09:03.475167  199649 out.go:179] * [default-k8s-diff-port-007533] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1013 22:09:03.478180  199649 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 22:09:03.478252  199649 notify.go:220] Checking for updates...
	I1013 22:09:03.484390  199649 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 22:09:03.487398  199649 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-2495/kubeconfig
	I1013 22:09:03.490440  199649 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-2495/.minikube
	I1013 22:09:03.493889  199649 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1013 22:09:03.496797  199649 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 22:09:03.500281  199649 config.go:182] Loaded profile config "embed-certs-251758": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:09:03.500460  199649 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 22:09:03.546966  199649 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1013 22:09:03.547092  199649 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 22:09:03.679913  199649 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-13 22:09:03.666685348 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 22:09:03.680020  199649 docker.go:318] overlay module found
	I1013 22:09:03.683101  199649 out.go:179] * Using the docker driver based on user configuration
	I1013 22:09:03.685936  199649 start.go:305] selected driver: docker
	I1013 22:09:03.685955  199649 start.go:925] validating driver "docker" against <nil>
	I1013 22:09:03.685975  199649 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 22:09:03.686693  199649 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 22:09:03.802940  199649 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-13 22:09:03.788881112 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 22:09:03.803108  199649 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1013 22:09:03.803340  199649 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 22:09:03.806408  199649 out.go:179] * Using Docker driver with root privileges
	I1013 22:09:03.809790  199649 cni.go:84] Creating CNI manager for ""
	I1013 22:09:03.809862  199649 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 22:09:03.809877  199649 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1013 22:09:03.809953  199649 start.go:349] cluster config:
	{Name:default-k8s-diff-port-007533 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-007533 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:09:03.813429  199649 out.go:179] * Starting "default-k8s-diff-port-007533" primary control-plane node in "default-k8s-diff-port-007533" cluster
	I1013 22:09:03.816650  199649 cache.go:123] Beginning downloading kic base image for docker with crio
	I1013 22:09:03.825950  199649 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1013 22:09:03.829768  199649 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:09:03.829834  199649 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-2495/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1013 22:09:03.829845  199649 cache.go:58] Caching tarball of preloaded images
	I1013 22:09:03.829935  199649 preload.go:233] Found /home/jenkins/minikube-integration/21724-2495/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1013 22:09:03.829944  199649 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1013 22:09:03.830055  199649 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/default-k8s-diff-port-007533/config.json ...
	I1013 22:09:03.830074  199649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/default-k8s-diff-port-007533/config.json: {Name:mk8bcd3b0fcb3205d620b2adb470d3840baeacbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:09:03.830231  199649 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1013 22:09:03.856173  199649 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1013 22:09:03.856198  199649 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1013 22:09:03.856212  199649 cache.go:232] Successfully downloaded all kic artifacts
	I1013 22:09:03.856233  199649 start.go:360] acquireMachinesLock for default-k8s-diff-port-007533: {Name:mk990b5defb290df24f36fb536d48d3275652286 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 22:09:03.856321  199649 start.go:364] duration metric: took 73.557µs to acquireMachinesLock for "default-k8s-diff-port-007533"
	I1013 22:09:03.856361  199649 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-007533 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-007533 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:
false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 22:09:03.856434  199649 start.go:125] createHost starting for "" (driver="docker")
	I1013 22:09:05.260311  196707 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.991849675s)
	I1013 22:09:05.260380  196707 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (7.9516229s)
	I1013 22:09:05.260415  196707 node_ready.go:35] waiting up to 6m0s for node "embed-certs-251758" to be "Ready" ...
	I1013 22:09:05.260719  196707 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.81971968s)
	I1013 22:09:05.326803  196707 node_ready.go:49] node "embed-certs-251758" is "Ready"
	I1013 22:09:05.326878  196707 node_ready.go:38] duration metric: took 66.445016ms for node "embed-certs-251758" to be "Ready" ...
	I1013 22:09:05.326906  196707 api_server.go:52] waiting for apiserver process to appear ...
	I1013 22:09:05.326988  196707 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 22:09:05.429527  196707 api_server.go:72] duration metric: took 8.624157877s to wait for apiserver process to appear ...
	I1013 22:09:05.429547  196707 api_server.go:88] waiting for apiserver healthz status ...
	I1013 22:09:05.429564  196707 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1013 22:09:05.429920  196707 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.53572732s)
	I1013 22:09:05.433086  196707 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-251758 addons enable metrics-server
	
	I1013 22:09:05.436023  196707 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1013 22:09:05.439902  196707 addons.go:514] duration metric: took 8.634258733s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1013 22:09:05.440214  196707 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1013 22:09:05.441480  196707 api_server.go:141] control plane version: v1.34.1
	I1013 22:09:05.441498  196707 api_server.go:131] duration metric: took 11.945217ms to wait for apiserver health ...
	I1013 22:09:05.441505  196707 system_pods.go:43] waiting for kube-system pods to appear ...
	I1013 22:09:05.446368  196707 system_pods.go:59] 8 kube-system pods found
	I1013 22:09:05.446452  196707 system_pods.go:61] "coredns-66bc5c9577-gkbv8" [ae7b4689-bcb1-4a31-84a2-726b234eceb7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 22:09:05.446480  196707 system_pods.go:61] "etcd-embed-certs-251758" [a9014fd5-64d9-463c-a4a4-4d640bcdc8ca] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1013 22:09:05.446501  196707 system_pods.go:61] "kindnet-csh4p" [e79c32bc-0bbe-43e8-bbea-ccd4ff075bb7] Running
	I1013 22:09:05.446545  196707 system_pods.go:61] "kube-apiserver-embed-certs-251758" [93ef7029-7dd4-4212-a4b5-49b211fad012] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1013 22:09:05.446568  196707 system_pods.go:61] "kube-controller-manager-embed-certs-251758" [056a074f-a27c-4bd8-b72e-c997d1bafdd4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1013 22:09:05.446598  196707 system_pods.go:61] "kube-proxy-nmmdh" [7726987e-433d-4e17-9b95-7c1d46d6a2e3] Running
	I1013 22:09:05.446619  196707 system_pods.go:61] "kube-scheduler-embed-certs-251758" [ed4ecb4e-62c7-4b0b-825a-2f0c75fa8337] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1013 22:09:05.446645  196707 system_pods.go:61] "storage-provisioner" [aadbfae4-4ea2-4d6b-be6d-ac97012be757] Running
	I1013 22:09:05.446673  196707 system_pods.go:74] duration metric: took 5.161551ms to wait for pod list to return data ...
	I1013 22:09:05.446694  196707 default_sa.go:34] waiting for default service account to be created ...
	I1013 22:09:05.451335  196707 default_sa.go:45] found service account: "default"
	I1013 22:09:05.451391  196707 default_sa.go:55] duration metric: took 4.676198ms for default service account to be created ...
	I1013 22:09:05.451424  196707 system_pods.go:116] waiting for k8s-apps to be running ...
	I1013 22:09:05.455224  196707 system_pods.go:86] 8 kube-system pods found
	I1013 22:09:05.455300  196707 system_pods.go:89] "coredns-66bc5c9577-gkbv8" [ae7b4689-bcb1-4a31-84a2-726b234eceb7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 22:09:05.455327  196707 system_pods.go:89] "etcd-embed-certs-251758" [a9014fd5-64d9-463c-a4a4-4d640bcdc8ca] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1013 22:09:05.455349  196707 system_pods.go:89] "kindnet-csh4p" [e79c32bc-0bbe-43e8-bbea-ccd4ff075bb7] Running
	I1013 22:09:05.455389  196707 system_pods.go:89] "kube-apiserver-embed-certs-251758" [93ef7029-7dd4-4212-a4b5-49b211fad012] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1013 22:09:05.455410  196707 system_pods.go:89] "kube-controller-manager-embed-certs-251758" [056a074f-a27c-4bd8-b72e-c997d1bafdd4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1013 22:09:05.455429  196707 system_pods.go:89] "kube-proxy-nmmdh" [7726987e-433d-4e17-9b95-7c1d46d6a2e3] Running
	I1013 22:09:05.455458  196707 system_pods.go:89] "kube-scheduler-embed-certs-251758" [ed4ecb4e-62c7-4b0b-825a-2f0c75fa8337] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1013 22:09:05.455494  196707 system_pods.go:89] "storage-provisioner" [aadbfae4-4ea2-4d6b-be6d-ac97012be757] Running
	I1013 22:09:05.455516  196707 system_pods.go:126] duration metric: took 4.073087ms to wait for k8s-apps to be running ...
	I1013 22:09:05.455548  196707 system_svc.go:44] waiting for kubelet service to be running ....
	I1013 22:09:05.455626  196707 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 22:09:05.472455  196707 system_svc.go:56] duration metric: took 16.900529ms WaitForService to wait for kubelet
	I1013 22:09:05.472482  196707 kubeadm.go:586] duration metric: took 8.667117989s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 22:09:05.472501  196707 node_conditions.go:102] verifying NodePressure condition ...
	I1013 22:09:05.477912  196707 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1013 22:09:05.477977  196707 node_conditions.go:123] node cpu capacity is 2
	I1013 22:09:05.477990  196707 node_conditions.go:105] duration metric: took 5.484464ms to run NodePressure ...
	I1013 22:09:05.478019  196707 start.go:241] waiting for startup goroutines ...
	I1013 22:09:05.478034  196707 start.go:246] waiting for cluster config update ...
	I1013 22:09:05.478046  196707 start.go:255] writing updated cluster config ...
	I1013 22:09:05.478344  196707 ssh_runner.go:195] Run: rm -f paused
	I1013 22:09:05.482368  196707 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 22:09:05.547321  196707 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-gkbv8" in "kube-system" namespace to be "Ready" or be gone ...
	W1013 22:09:07.569788  196707 pod_ready.go:104] pod "coredns-66bc5c9577-gkbv8" is not "Ready", error: <nil>
	I1013 22:09:03.860187  199649 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1013 22:09:03.860416  199649 start.go:159] libmachine.API.Create for "default-k8s-diff-port-007533" (driver="docker")
	I1013 22:09:03.860454  199649 client.go:168] LocalClient.Create starting
	I1013 22:09:03.860531  199649 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem
	I1013 22:09:03.860563  199649 main.go:141] libmachine: Decoding PEM data...
	I1013 22:09:03.860576  199649 main.go:141] libmachine: Parsing certificate...
	I1013 22:09:03.860625  199649 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem
	I1013 22:09:03.860641  199649 main.go:141] libmachine: Decoding PEM data...
	I1013 22:09:03.860656  199649 main.go:141] libmachine: Parsing certificate...
	I1013 22:09:03.861003  199649 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-007533 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1013 22:09:03.881595  199649 cli_runner.go:211] docker network inspect default-k8s-diff-port-007533 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1013 22:09:03.881663  199649 network_create.go:284] running [docker network inspect default-k8s-diff-port-007533] to gather additional debugging logs...
	I1013 22:09:03.881679  199649 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-007533
	W1013 22:09:03.899761  199649 cli_runner.go:211] docker network inspect default-k8s-diff-port-007533 returned with exit code 1
	I1013 22:09:03.899925  199649 network_create.go:287] error running [docker network inspect default-k8s-diff-port-007533]: docker network inspect default-k8s-diff-port-007533: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-007533 not found
	I1013 22:09:03.899943  199649 network_create.go:289] output of [docker network inspect default-k8s-diff-port-007533]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-007533 not found
	
	** /stderr **
	I1013 22:09:03.900031  199649 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 22:09:03.936114  199649 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-95647f6063f5 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:2e:3d:b3:ce:26:60} reservation:<nil>}
	I1013 22:09:03.936602  199649 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-524c3512c6b6 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:06:88:a1:02:e0:8e} reservation:<nil>}
	I1013 22:09:03.936929  199649 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-2d17b8b5c002 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:aa:ca:29:7e:1f:a0} reservation:<nil>}
	I1013 22:09:03.937308  199649 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019dd0c0}
	I1013 22:09:03.937332  199649 network_create.go:124] attempt to create docker network default-k8s-diff-port-007533 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1013 22:09:03.937382  199649 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-007533 default-k8s-diff-port-007533
	I1013 22:09:04.033353  199649 network_create.go:108] docker network default-k8s-diff-port-007533 192.168.76.0/24 created
	I1013 22:09:04.033383  199649 kic.go:121] calculated static IP "192.168.76.2" for the "default-k8s-diff-port-007533" container
	I1013 22:09:04.033478  199649 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1013 22:09:04.056834  199649 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-007533 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-007533 --label created_by.minikube.sigs.k8s.io=true
	I1013 22:09:04.087398  199649 oci.go:103] Successfully created a docker volume default-k8s-diff-port-007533
	I1013 22:09:04.087485  199649 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-007533-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-007533 --entrypoint /usr/bin/test -v default-k8s-diff-port-007533:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1013 22:09:04.822560  199649 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-007533
	I1013 22:09:04.822609  199649 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:09:04.822627  199649 kic.go:194] Starting extracting preloaded images to volume ...
	I1013 22:09:04.822697  199649 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-2495/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-007533:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	W1013 22:09:10.056067  196707 pod_ready.go:104] pod "coredns-66bc5c9577-gkbv8" is not "Ready", error: <nil>
	W1013 22:09:12.553823  196707 pod_ready.go:104] pod "coredns-66bc5c9577-gkbv8" is not "Ready", error: <nil>
	I1013 22:09:09.541636  199649 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-2495/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-007533:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.718899317s)
	I1013 22:09:09.541679  199649 kic.go:203] duration metric: took 4.719047062s to extract preloaded images to volume ...
	W1013 22:09:09.541807  199649 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1013 22:09:09.541920  199649 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1013 22:09:09.629293  199649 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-007533 --name default-k8s-diff-port-007533 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-007533 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-007533 --network default-k8s-diff-port-007533 --ip 192.168.76.2 --volume default-k8s-diff-port-007533:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1013 22:09:10.040070  199649 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-007533 --format={{.State.Running}}
	I1013 22:09:10.074941  199649 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-007533 --format={{.State.Status}}
	I1013 22:09:10.119574  199649 cli_runner.go:164] Run: docker exec default-k8s-diff-port-007533 stat /var/lib/dpkg/alternatives/iptables
	I1013 22:09:10.191181  199649 oci.go:144] the created container "default-k8s-diff-port-007533" has a running status.
	I1013 22:09:10.191226  199649 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21724-2495/.minikube/machines/default-k8s-diff-port-007533/id_rsa...
	I1013 22:09:10.393941  199649 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21724-2495/.minikube/machines/default-k8s-diff-port-007533/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1013 22:09:10.420585  199649 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-007533 --format={{.State.Status}}
	I1013 22:09:10.445066  199649 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1013 22:09:10.445084  199649 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-007533 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1013 22:09:10.521101  199649 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-007533 --format={{.State.Status}}
	I1013 22:09:10.554881  199649 machine.go:93] provisionDockerMachine start ...
	I1013 22:09:10.554986  199649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-007533
	I1013 22:09:10.590478  199649 main.go:141] libmachine: Using SSH client type: native
	I1013 22:09:10.590892  199649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33081 <nil> <nil>}
	I1013 22:09:10.590903  199649 main.go:141] libmachine: About to run SSH command:
	hostname
	I1013 22:09:10.591805  199649 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1013 22:09:13.743321  199649 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-007533
	
	I1013 22:09:13.743348  199649 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-007533"
	I1013 22:09:13.743420  199649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-007533
	I1013 22:09:13.763181  199649 main.go:141] libmachine: Using SSH client type: native
	I1013 22:09:13.763502  199649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33081 <nil> <nil>}
	I1013 22:09:13.763520  199649 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-007533 && echo "default-k8s-diff-port-007533" | sudo tee /etc/hostname
	I1013 22:09:13.936593  199649 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-007533
	
	I1013 22:09:13.936688  199649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-007533
	I1013 22:09:13.962507  199649 main.go:141] libmachine: Using SSH client type: native
	I1013 22:09:13.962816  199649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33081 <nil> <nil>}
	I1013 22:09:13.962840  199649 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-007533' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-007533/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-007533' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 22:09:14.124242  199649 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1013 22:09:14.124270  199649 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-2495/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-2495/.minikube}
	I1013 22:09:14.124300  199649 ubuntu.go:190] setting up certificates
	I1013 22:09:14.124311  199649 provision.go:84] configureAuth start
	I1013 22:09:14.124392  199649 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-007533
	I1013 22:09:14.146677  199649 provision.go:143] copyHostCerts
	I1013 22:09:14.146755  199649 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-2495/.minikube/ca.pem, removing ...
	I1013 22:09:14.146770  199649 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-2495/.minikube/ca.pem
	I1013 22:09:14.146843  199649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-2495/.minikube/ca.pem (1082 bytes)
	I1013 22:09:14.146944  199649 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-2495/.minikube/cert.pem, removing ...
	I1013 22:09:14.146956  199649 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-2495/.minikube/cert.pem
	I1013 22:09:14.146986  199649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-2495/.minikube/cert.pem (1123 bytes)
	I1013 22:09:14.147079  199649 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-2495/.minikube/key.pem, removing ...
	I1013 22:09:14.147090  199649 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-2495/.minikube/key.pem
	I1013 22:09:14.147122  199649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-2495/.minikube/key.pem (1675 bytes)
	I1013 22:09:14.147191  199649 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-2495/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-007533 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-007533 localhost minikube]
	I1013 22:09:14.595539  199649 provision.go:177] copyRemoteCerts
	I1013 22:09:14.595607  199649 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 22:09:14.595654  199649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-007533
	I1013 22:09:14.613553  199649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33081 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/default-k8s-diff-port-007533/id_rsa Username:docker}
	I1013 22:09:14.725199  199649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1013 22:09:14.749991  199649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1013 22:09:14.782421  199649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1013 22:09:14.803414  199649 provision.go:87] duration metric: took 679.079133ms to configureAuth
	I1013 22:09:14.803440  199649 ubuntu.go:206] setting minikube options for container-runtime
	I1013 22:09:14.803624  199649 config.go:182] Loaded profile config "default-k8s-diff-port-007533": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:09:14.803733  199649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-007533
	I1013 22:09:14.828135  199649 main.go:141] libmachine: Using SSH client type: native
	I1013 22:09:14.828461  199649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33081 <nil> <nil>}
	I1013 22:09:14.828477  199649 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1013 22:09:15.214101  199649 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1013 22:09:15.214126  199649 machine.go:96] duration metric: took 4.65922273s to provisionDockerMachine
	I1013 22:09:15.214136  199649 client.go:171] duration metric: took 11.353676095s to LocalClient.Create
	I1013 22:09:15.214149  199649 start.go:167] duration metric: took 11.353733998s to libmachine.API.Create "default-k8s-diff-port-007533"
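	The sysconfig drop-in tee'd over SSH a few lines above injects CRIO_MINIKUBE_OPTIONS so CRI-O treats the service CIDR as an insecure registry, and then restarts the runtime. A rough Go sketch of the same file write plus restart, run locally instead of over SSH (requires root; the paths and CIDR are copied from the log, the helper itself is not minikube code):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Same drop-in content as in the log above.
	opts := "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n"
	if err := os.MkdirAll("/etc/sysconfig", 0755); err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	if err := os.WriteFile("/etc/sysconfig/crio.minikube", []byte(opts), 0644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	// Pick up the new options.
	if out, err := exec.Command("systemctl", "restart", "crio").CombinedOutput(); err != nil {
		fmt.Fprintf(os.Stderr, "restart crio: %v: %s\n", err, out)
	}
}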
	I1013 22:09:15.214157  199649 start.go:293] postStartSetup for "default-k8s-diff-port-007533" (driver="docker")
	I1013 22:09:15.214166  199649 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 22:09:15.214230  199649 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 22:09:15.214272  199649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-007533
	I1013 22:09:15.240735  199649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33081 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/default-k8s-diff-port-007533/id_rsa Username:docker}
	I1013 22:09:15.352907  199649 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 22:09:15.358623  199649 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1013 22:09:15.358649  199649 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1013 22:09:15.358659  199649 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-2495/.minikube/addons for local assets ...
	I1013 22:09:15.358718  199649 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-2495/.minikube/files for local assets ...
	I1013 22:09:15.358807  199649 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem -> 42992.pem in /etc/ssl/certs
	I1013 22:09:15.358928  199649 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1013 22:09:15.370340  199649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem --> /etc/ssl/certs/42992.pem (1708 bytes)
	I1013 22:09:15.395099  199649 start.go:296] duration metric: took 180.928266ms for postStartSetup
	I1013 22:09:15.395455  199649 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-007533
	I1013 22:09:15.419073  199649 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/default-k8s-diff-port-007533/config.json ...
	I1013 22:09:15.419341  199649 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 22:09:15.419397  199649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-007533
	I1013 22:09:15.456275  199649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33081 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/default-k8s-diff-port-007533/id_rsa Username:docker}
	I1013 22:09:15.569866  199649 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1013 22:09:15.580944  199649 start.go:128] duration metric: took 11.724485747s to createHost
	I1013 22:09:15.581005  199649 start.go:83] releasing machines lock for "default-k8s-diff-port-007533", held for 11.724659461s
	I1013 22:09:15.581126  199649 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-007533
	I1013 22:09:15.610485  199649 ssh_runner.go:195] Run: cat /version.json
	I1013 22:09:15.610538  199649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-007533
	I1013 22:09:15.610767  199649 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 22:09:15.610827  199649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-007533
	I1013 22:09:15.641289  199649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33081 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/default-k8s-diff-port-007533/id_rsa Username:docker}
	I1013 22:09:15.652036  199649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33081 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/default-k8s-diff-port-007533/id_rsa Username:docker}
	I1013 22:09:15.755572  199649 ssh_runner.go:195] Run: systemctl --version
	I1013 22:09:15.881521  199649 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1013 22:09:15.966142  199649 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 22:09:15.971548  199649 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 22:09:15.971691  199649 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 22:09:16.014737  199649 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1013 22:09:16.014760  199649 start.go:495] detecting cgroup driver to use...
	I1013 22:09:16.014804  199649 detect.go:187] detected "cgroupfs" cgroup driver on host os
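	The "cgroupfs" choice above reflects a cgroup v1 host without systemd managing cgroups. A quick way to make a similar distinction is to check for the unified-hierarchy controllers file; this is a simplified illustration of the idea, not minikube's actual detect.go logic:

package main

import (
	"fmt"
	"os"
)

// On a cgroup v2 host /sys/fs/cgroup/cgroup.controllers exists; the real
// detection also considers whether systemd is the cgroup manager.
func main() {
	if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
		fmt.Println("cgroup v2 host (systemd driver is the usual choice)")
	} else {
		fmt.Println("cgroup v1 host (cgroupfs driver, as detected in the log above)")
	}
}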
	I1013 22:09:16.014870  199649 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1013 22:09:16.047094  199649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1013 22:09:16.065227  199649 docker.go:218] disabling cri-docker service (if available) ...
	I1013 22:09:16.065301  199649 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 22:09:16.093972  199649 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 22:09:16.133005  199649 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 22:09:16.336622  199649 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 22:09:16.573046  199649 docker.go:234] disabling docker service ...
	I1013 22:09:16.573137  199649 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 22:09:16.602629  199649 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 22:09:16.622518  199649 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 22:09:16.808556  199649 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 22:09:16.974085  199649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1013 22:09:16.989507  199649 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 22:09:17.007548  199649 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1013 22:09:17.007669  199649 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:09:17.019897  199649 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1013 22:09:17.020018  199649 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:09:17.035961  199649 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:09:17.052571  199649 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:09:17.066966  199649 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 22:09:17.077448  199649 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:09:17.091484  199649 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:09:17.108939  199649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:09:17.120668  199649 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 22:09:17.128801  199649 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1013 22:09:17.137065  199649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:09:17.298582  199649 ssh_runner.go:195] Run: sudo systemctl restart crio
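	The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: they pin the pause image, set cgroup_manager to "cgroupfs", re-add conmon_cgroup, and open unprivileged ports via default_sysctls, before daemon-reload and a CRI-O restart. A sketch of the first two edits done with Go regexps instead of sed; the rewrite rules are simplified (for example, it does not de-duplicate an existing conmon_cgroup line) and the file path is taken from the log:

package main

import (
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		panic(err)
	}
	s := string(data)
	// Equivalent of: sed 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
	s = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(s, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	// Equivalent of the cgroup_manager edit plus appending conmon_cgroup after it.
	s = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(s, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
	if err := os.WriteFile(conf, []byte(s), 0644); err != nil {
		panic(err)
	}
	// A real run would follow with: systemctl daemon-reload && systemctl restart crio.
}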
	I1013 22:09:17.770547  199649 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1013 22:09:17.770631  199649 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1013 22:09:17.775155  199649 start.go:563] Will wait 60s for crictl version
	I1013 22:09:17.775241  199649 ssh_runner.go:195] Run: which crictl
	I1013 22:09:17.779462  199649 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1013 22:09:17.818255  199649 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1013 22:09:17.818358  199649 ssh_runner.go:195] Run: crio --version
	I1013 22:09:17.853475  199649 ssh_runner.go:195] Run: crio --version
	I1013 22:09:17.892899  199649 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1013 22:09:14.555117  196707 pod_ready.go:104] pod "coredns-66bc5c9577-gkbv8" is not "Ready", error: <nil>
	W1013 22:09:16.566759  196707 pod_ready.go:104] pod "coredns-66bc5c9577-gkbv8" is not "Ready", error: <nil>
	I1013 22:09:17.895814  199649 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-007533 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 22:09:17.920962  199649 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1013 22:09:17.926325  199649 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 22:09:17.939530  199649 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-007533 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-007533 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 22:09:17.939641  199649 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:09:17.939712  199649 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 22:09:17.986627  199649 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 22:09:17.986649  199649 crio.go:433] Images already preloaded, skipping extraction
	I1013 22:09:17.986709  199649 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 22:09:18.028809  199649 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 22:09:18.028838  199649 cache_images.go:85] Images are preloaded, skipping loading
	I1013 22:09:18.028847  199649 kubeadm.go:934] updating node { 192.168.76.2 8444 v1.34.1 crio true true} ...
	I1013 22:09:18.028937  199649 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-007533 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-007533 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
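	The rendered unit above is what gets copied a few lines below as the 378-byte kubelet drop-in at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A sketch of rendering that ExecStart line from a node's parameters with text/template; the struct and template text here are illustrative stand-ins, the real template lives in minikube's source:

package main

import (
	"os"
	"text/template"
)

// Illustrative values substituted into the kubelet drop-in shown above.
type kubeletOpts struct {
	KubernetesVersion string
	NodeName          string
	NodeIP            string
}

const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	_ = t.Execute(os.Stdout, kubeletOpts{
		KubernetesVersion: "v1.34.1",
		NodeName:          "default-k8s-diff-port-007533",
		NodeIP:            "192.168.76.2",
	})
}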
	I1013 22:09:18.029039  199649 ssh_runner.go:195] Run: crio config
	I1013 22:09:18.122252  199649 cni.go:84] Creating CNI manager for ""
	I1013 22:09:18.122326  199649 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 22:09:18.122363  199649 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1013 22:09:18.122417  199649 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-007533 NodeName:default-k8s-diff-port-007533 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 22:09:18.122588  199649 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-007533"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1013 22:09:18.122714  199649 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1013 22:09:18.131150  199649 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 22:09:18.131286  199649 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 22:09:18.140488  199649 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1013 22:09:18.153807  199649 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 22:09:18.167268  199649 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1013 22:09:18.180826  199649 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1013 22:09:18.184916  199649 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 22:09:18.196385  199649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:09:18.354101  199649 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 22:09:18.378942  199649 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/default-k8s-diff-port-007533 for IP: 192.168.76.2
	I1013 22:09:18.379017  199649 certs.go:195] generating shared ca certs ...
	I1013 22:09:18.379058  199649 certs.go:227] acquiring lock for ca certs: {Name:mk2386d3847709c1fe7ff4ab092e2e3fd8551167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:09:18.379269  199649 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-2495/.minikube/ca.key
	I1013 22:09:18.379379  199649 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.key
	I1013 22:09:18.379405  199649 certs.go:257] generating profile certs ...
	I1013 22:09:18.379498  199649 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/default-k8s-diff-port-007533/client.key
	I1013 22:09:18.379550  199649 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/default-k8s-diff-port-007533/client.crt with IP's: []
	I1013 22:09:19.042459  199649 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/default-k8s-diff-port-007533/client.crt ...
	I1013 22:09:19.042543  199649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/default-k8s-diff-port-007533/client.crt: {Name:mk33cf6d21f8105402a719fcdcb5867dc8ff2024 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:09:19.042756  199649 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/default-k8s-diff-port-007533/client.key ...
	I1013 22:09:19.042801  199649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/default-k8s-diff-port-007533/client.key: {Name:mk8316bdc8d74f8e5a75398eb7d2e1bb2e8dfe2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:09:19.042951  199649 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/default-k8s-diff-port-007533/apiserver.key.e8d90e38
	I1013 22:09:19.043007  199649 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/default-k8s-diff-port-007533/apiserver.crt.e8d90e38 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1013 22:09:20.188334  199649 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/default-k8s-diff-port-007533/apiserver.crt.e8d90e38 ...
	I1013 22:09:20.188412  199649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/default-k8s-diff-port-007533/apiserver.crt.e8d90e38: {Name:mkb81c2077e2bb51c5a0173e098c8c755a5cb4fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:09:20.188644  199649 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/default-k8s-diff-port-007533/apiserver.key.e8d90e38 ...
	I1013 22:09:20.188679  199649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/default-k8s-diff-port-007533/apiserver.key.e8d90e38: {Name:mkd5ea86d3de7ff66b00960144398820cd664590 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:09:20.188844  199649 certs.go:382] copying /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/default-k8s-diff-port-007533/apiserver.crt.e8d90e38 -> /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/default-k8s-diff-port-007533/apiserver.crt
	I1013 22:09:20.188988  199649 certs.go:386] copying /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/default-k8s-diff-port-007533/apiserver.key.e8d90e38 -> /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/default-k8s-diff-port-007533/apiserver.key
	I1013 22:09:20.189113  199649 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/default-k8s-diff-port-007533/proxy-client.key
	I1013 22:09:20.189163  199649 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/default-k8s-diff-port-007533/proxy-client.crt with IP's: []
	I1013 22:09:20.478778  199649 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/default-k8s-diff-port-007533/proxy-client.crt ...
	I1013 22:09:20.478810  199649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/default-k8s-diff-port-007533/proxy-client.crt: {Name:mk8007471a17e56e105af0084e17752cf6a507e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:09:20.479003  199649 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/default-k8s-diff-port-007533/proxy-client.key ...
	I1013 22:09:20.479022  199649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/default-k8s-diff-port-007533/proxy-client.key: {Name:mkfa98cdbeff2a61ed92491341fe1447d5b86687 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:09:20.479270  199649 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/4299.pem (1338 bytes)
	W1013 22:09:20.479320  199649 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-2495/.minikube/certs/4299_empty.pem, impossibly tiny 0 bytes
	I1013 22:09:20.479335  199649 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca-key.pem (1679 bytes)
	I1013 22:09:20.479362  199649 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem (1082 bytes)
	I1013 22:09:20.479389  199649 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem (1123 bytes)
	I1013 22:09:20.479414  199649 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/key.pem (1675 bytes)
	I1013 22:09:20.479465  199649 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem (1708 bytes)
	I1013 22:09:20.480084  199649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 22:09:20.502086  199649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1013 22:09:20.526965  199649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 22:09:20.546548  199649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1013 22:09:20.567316  199649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/default-k8s-diff-port-007533/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1013 22:09:20.585629  199649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/default-k8s-diff-port-007533/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1013 22:09:20.611386  199649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/default-k8s-diff-port-007533/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 22:09:20.633444  199649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/default-k8s-diff-port-007533/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1013 22:09:20.656943  199649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem --> /usr/share/ca-certificates/42992.pem (1708 bytes)
	I1013 22:09:20.675327  199649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 22:09:20.697653  199649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/certs/4299.pem --> /usr/share/ca-certificates/4299.pem (1338 bytes)
	I1013 22:09:20.719000  199649 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 22:09:20.735174  199649 ssh_runner.go:195] Run: openssl version
	I1013 22:09:20.741994  199649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 22:09:20.750702  199649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:09:20.756308  199649 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 20:59 /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:09:20.756451  199649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:09:20.813386  199649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1013 22:09:20.824030  199649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4299.pem && ln -fs /usr/share/ca-certificates/4299.pem /etc/ssl/certs/4299.pem"
	I1013 22:09:20.838333  199649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4299.pem
	I1013 22:09:20.844242  199649 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 13 21:06 /usr/share/ca-certificates/4299.pem
	I1013 22:09:20.844368  199649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4299.pem
	I1013 22:09:20.895701  199649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4299.pem /etc/ssl/certs/51391683.0"
	I1013 22:09:20.905749  199649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/42992.pem && ln -fs /usr/share/ca-certificates/42992.pem /etc/ssl/certs/42992.pem"
	I1013 22:09:20.914275  199649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42992.pem
	I1013 22:09:20.917906  199649 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 13 21:06 /usr/share/ca-certificates/42992.pem
	I1013 22:09:20.917981  199649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42992.pem
	I1013 22:09:20.961025  199649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/42992.pem /etc/ssl/certs/3ec20f2e.0"
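	The ls/openssl/ln -fs sequence above follows OpenSSL's c_rehash convention: each CA PEM under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after the hash that "openssl x509 -hash -noout" prints (b5213941.0, 51391683.0, 3ec20f2e.0 here), which is how TLS clients on the node locate the trust anchors. A small Go sketch that shells out to openssl for the hash and creates the matching symlink; the concrete PEM path below is illustrative and the program needs the openssl CLI plus write access to /etc/ssl/certs:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash reproduces the "openssl x509 -hash" + "ln -fs" pairing
// from the log above for a single certificate.
func linkBySubjectHash(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // mirror the force flag of ln -fs
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}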
	I1013 22:09:20.970904  199649 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 22:09:20.975201  199649 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1013 22:09:20.975260  199649 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-007533 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-007533 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:09:20.975336  199649 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 22:09:20.975403  199649 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 22:09:21.007240  199649 cri.go:89] found id: ""
	I1013 22:09:21.007327  199649 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1013 22:09:21.015525  199649 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1013 22:09:21.023842  199649 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1013 22:09:21.023951  199649 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1013 22:09:21.031756  199649 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1013 22:09:21.031802  199649 kubeadm.go:157] found existing configuration files:
	
	I1013 22:09:21.031872  199649 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1013 22:09:21.039648  199649 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1013 22:09:21.039713  199649 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1013 22:09:21.047356  199649 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1013 22:09:21.055848  199649 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1013 22:09:21.055931  199649 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1013 22:09:21.063426  199649 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1013 22:09:21.071198  199649 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1013 22:09:21.071281  199649 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1013 22:09:21.078499  199649 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1013 22:09:21.086083  199649 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1013 22:09:21.086166  199649 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1013 22:09:21.093634  199649 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1013 22:09:21.136701  199649 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1013 22:09:21.137018  199649 kubeadm.go:318] [preflight] Running pre-flight checks
	I1013 22:09:21.159000  199649 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1013 22:09:21.159102  199649 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1013 22:09:21.159162  199649 kubeadm.go:318] OS: Linux
	I1013 22:09:21.159240  199649 kubeadm.go:318] CGROUPS_CPU: enabled
	I1013 22:09:21.159320  199649 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1013 22:09:21.159397  199649 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1013 22:09:21.159471  199649 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1013 22:09:21.159538  199649 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1013 22:09:21.159619  199649 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1013 22:09:21.159690  199649 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1013 22:09:21.159770  199649 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1013 22:09:21.159870  199649 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1013 22:09:21.229918  199649 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1013 22:09:21.230082  199649 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1013 22:09:21.230211  199649 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1013 22:09:21.239894  199649 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1013 22:09:19.062822  196707 pod_ready.go:104] pod "coredns-66bc5c9577-gkbv8" is not "Ready", error: <nil>
	W1013 22:09:21.553722  196707 pod_ready.go:104] pod "coredns-66bc5c9577-gkbv8" is not "Ready", error: <nil>
	I1013 22:09:21.245520  199649 out.go:252]   - Generating certificates and keys ...
	I1013 22:09:21.245689  199649 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1013 22:09:21.245798  199649 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1013 22:09:21.307735  199649 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1013 22:09:21.595059  199649 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1013 22:09:22.116884  199649 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1013 22:09:22.672125  199649 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1013 22:09:22.894002  199649 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1013 22:09:22.894642  199649 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-007533 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1013 22:09:23.301426  199649 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1013 22:09:23.301749  199649 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-007533 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1013 22:09:23.798005  199649 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1013 22:09:24.494536  199649 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1013 22:09:24.750411  199649 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1013 22:09:24.750696  199649 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1013 22:09:25.111608  199649 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1013 22:09:25.797923  199649 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1013 22:09:26.030538  199649 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1013 22:09:26.432338  199649 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1013 22:09:27.284922  199649 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1013 22:09:27.286141  199649 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1013 22:09:27.290801  199649 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1013 22:09:23.554395  196707 pod_ready.go:104] pod "coredns-66bc5c9577-gkbv8" is not "Ready", error: <nil>
	W1013 22:09:26.053777  196707 pod_ready.go:104] pod "coredns-66bc5c9577-gkbv8" is not "Ready", error: <nil>
	I1013 22:09:27.294415  199649 out.go:252]   - Booting up control plane ...
	I1013 22:09:27.294522  199649 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1013 22:09:27.294610  199649 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1013 22:09:27.295443  199649 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1013 22:09:27.311582  199649 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1013 22:09:27.312090  199649 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1013 22:09:27.319974  199649 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1013 22:09:27.320569  199649 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1013 22:09:27.320652  199649 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1013 22:09:27.453725  199649 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1013 22:09:27.453850  199649 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1013 22:09:28.455486  199649 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001664549s
	I1013 22:09:28.461079  199649 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1013 22:09:28.461249  199649 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8444/livez
	I1013 22:09:28.461389  199649 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1013 22:09:28.461511  199649 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1013 22:09:28.055207  196707 pod_ready.go:104] pod "coredns-66bc5c9577-gkbv8" is not "Ready", error: <nil>
	W1013 22:09:30.552440  196707 pod_ready.go:104] pod "coredns-66bc5c9577-gkbv8" is not "Ready", error: <nil>
	W1013 22:09:32.554665  196707 pod_ready.go:104] pod "coredns-66bc5c9577-gkbv8" is not "Ready", error: <nil>
	I1013 22:09:31.980123  199649 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.518631378s
	I1013 22:09:34.147447  199649 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 5.686381192s
	I1013 22:09:35.463637  199649 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 7.002441818s
	I1013 22:09:35.491584  199649 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1013 22:09:35.505944  199649 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1013 22:09:35.523310  199649 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1013 22:09:35.523519  199649 kubeadm.go:318] [mark-control-plane] Marking the node default-k8s-diff-port-007533 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1013 22:09:35.539808  199649 kubeadm.go:318] [bootstrap-token] Using token: 7ed21a.d18grr5is41jwd6z
	I1013 22:09:35.542738  199649 out.go:252]   - Configuring RBAC rules ...
	I1013 22:09:35.542868  199649 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1013 22:09:35.552995  199649 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1013 22:09:35.562078  199649 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1013 22:09:35.568725  199649 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1013 22:09:35.572819  199649 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1013 22:09:35.577281  199649 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1013 22:09:35.871613  199649 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1013 22:09:36.316288  199649 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1013 22:09:36.873935  199649 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1013 22:09:36.875241  199649 kubeadm.go:318] 
	I1013 22:09:36.875326  199649 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1013 22:09:36.875340  199649 kubeadm.go:318] 
	I1013 22:09:36.875422  199649 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1013 22:09:36.875427  199649 kubeadm.go:318] 
	I1013 22:09:36.875511  199649 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1013 22:09:36.875579  199649 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1013 22:09:36.875636  199649 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1013 22:09:36.875646  199649 kubeadm.go:318] 
	I1013 22:09:36.875717  199649 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1013 22:09:36.875727  199649 kubeadm.go:318] 
	I1013 22:09:36.875821  199649 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1013 22:09:36.875837  199649 kubeadm.go:318] 
	I1013 22:09:36.875892  199649 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1013 22:09:36.875993  199649 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1013 22:09:36.876064  199649 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1013 22:09:36.876072  199649 kubeadm.go:318] 
	I1013 22:09:36.876156  199649 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1013 22:09:36.876236  199649 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1013 22:09:36.876244  199649 kubeadm.go:318] 
	I1013 22:09:36.876329  199649 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8444 --token 7ed21a.d18grr5is41jwd6z \
	I1013 22:09:36.876441  199649 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:8246abc30e5ad4f73e0c5665e9f98b5f472397f3707b313971a0405cce9e4e60 \
	I1013 22:09:36.876464  199649 kubeadm.go:318] 	--control-plane 
	I1013 22:09:36.876468  199649 kubeadm.go:318] 
	I1013 22:09:36.876552  199649 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1013 22:09:36.876559  199649 kubeadm.go:318] 
	I1013 22:09:36.876640  199649 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8444 --token 7ed21a.d18grr5is41jwd6z \
	I1013 22:09:36.876742  199649 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:8246abc30e5ad4f73e0c5665e9f98b5f472397f3707b313971a0405cce9e4e60 
	I1013 22:09:36.882004  199649 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1013 22:09:36.882262  199649 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1013 22:09:36.882377  199649 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
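	The --discovery-token-ca-cert-hash value in the join commands above is kubeadm's public-key pin: a SHA-256 over the cluster CA certificate's Subject Public Key Info, which joining nodes use to verify they are talking to the right control plane. A short Go sketch that recomputes it from the CA copied to the node earlier in this log (path from the log; requires read access to it):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found in ca.crt")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	// Hash the DER-encoded Subject Public Key Info, as kubeadm does.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
}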
	I1013 22:09:36.882396  199649 cni.go:84] Creating CNI manager for ""
	I1013 22:09:36.882404  199649 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 22:09:36.885448  199649 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1013 22:09:35.053369  196707 pod_ready.go:104] pod "coredns-66bc5c9577-gkbv8" is not "Ready", error: <nil>
	W1013 22:09:37.054260  196707 pod_ready.go:104] pod "coredns-66bc5c9577-gkbv8" is not "Ready", error: <nil>
	I1013 22:09:36.888385  199649 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1013 22:09:36.892948  199649 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1013 22:09:36.892972  199649 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1013 22:09:36.911274  199649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1013 22:09:37.235707  199649 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1013 22:09:37.235892  199649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:09:37.235965  199649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-007533 minikube.k8s.io/updated_at=2025_10_13T22_09_37_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6d66ff63385795e7745a92b3d96cb54f5b977801 minikube.k8s.io/name=default-k8s-diff-port-007533 minikube.k8s.io/primary=true
	I1013 22:09:37.256420  199649 ops.go:34] apiserver oom_adj: -16
	I1013 22:09:37.452048  199649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:09:37.952622  199649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:09:38.452228  199649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:09:38.054543  196707 pod_ready.go:94] pod "coredns-66bc5c9577-gkbv8" is "Ready"
	I1013 22:09:38.054568  196707 pod_ready.go:86] duration metric: took 32.507218861s for pod "coredns-66bc5c9577-gkbv8" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:09:38.057916  196707 pod_ready.go:83] waiting for pod "etcd-embed-certs-251758" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:09:38.063530  196707 pod_ready.go:94] pod "etcd-embed-certs-251758" is "Ready"
	I1013 22:09:38.063563  196707 pod_ready.go:86] duration metric: took 5.571223ms for pod "etcd-embed-certs-251758" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:09:38.080104  196707 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-251758" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:09:38.086191  196707 pod_ready.go:94] pod "kube-apiserver-embed-certs-251758" is "Ready"
	I1013 22:09:38.086276  196707 pod_ready.go:86] duration metric: took 6.073791ms for pod "kube-apiserver-embed-certs-251758" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:09:38.089380  196707 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-251758" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:09:38.251467  196707 pod_ready.go:94] pod "kube-controller-manager-embed-certs-251758" is "Ready"
	I1013 22:09:38.251536  196707 pod_ready.go:86] duration metric: took 162.080852ms for pod "kube-controller-manager-embed-certs-251758" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:09:38.451599  196707 pod_ready.go:83] waiting for pod "kube-proxy-nmmdh" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:09:38.851419  196707 pod_ready.go:94] pod "kube-proxy-nmmdh" is "Ready"
	I1013 22:09:38.851447  196707 pod_ready.go:86] duration metric: took 399.82337ms for pod "kube-proxy-nmmdh" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:09:39.051354  196707 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-251758" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:09:39.451489  196707 pod_ready.go:94] pod "kube-scheduler-embed-certs-251758" is "Ready"
	I1013 22:09:39.451515  196707 pod_ready.go:86] duration metric: took 400.132982ms for pod "kube-scheduler-embed-certs-251758" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:09:39.451527  196707 pod_ready.go:40] duration metric: took 33.969085594s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 22:09:39.526554  196707 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1013 22:09:39.529960  196707 out.go:179] * Done! kubectl is now configured to use "embed-certs-251758" cluster and "default" namespace by default
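The pod_ready wait shown above polls each control-plane pod for the Ready condition, one label selector at a time, until every pod is Ready or gone. A minimal client-go sketch of that style of check follows; it is illustrative only (the helper name and the single hard-coded selector are assumptions, not minikube's actual code) and assumes a kubeconfig at the default location.

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether a pod's Ready condition is True.
    func podReady(p *corev1.Pod) bool {
    	for _, c := range p.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)

    	// One of the label selectors the wait loop above iterates over.
    	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(),
    		metav1.ListOptions{LabelSelector: "component=kube-apiserver"})
    	if err != nil {
    		panic(err)
    	}
    	for _, p := range pods.Items {
    		fmt.Printf("pod %q Ready=%v\n", p.Name, podReady(&p))
    	}
    }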
	I1013 22:09:38.952847  199649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:09:39.452148  199649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:09:39.953012  199649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:09:40.452802  199649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:09:40.952719  199649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:09:41.452994  199649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:09:41.596275  199649 kubeadm.go:1113] duration metric: took 4.360428839s to wait for elevateKubeSystemPrivileges
	I1013 22:09:41.596312  199649 kubeadm.go:402] duration metric: took 20.62105565s to StartCluster
	I1013 22:09:41.596330  199649 settings.go:142] acquiring lock: {Name:mk4a4b065845724eb9b4bb1832a39a02e57dd066 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:09:41.596435  199649 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-2495/kubeconfig
	I1013 22:09:41.598040  199649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/kubeconfig: {Name:mke211bdcf461494c5205a242d94f4d44bbce10e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:09:41.598285  199649 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 22:09:41.598597  199649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1013 22:09:41.598979  199649 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1013 22:09:41.599074  199649 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-007533"
	I1013 22:09:41.599104  199649 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-007533"
	I1013 22:09:41.599146  199649 host.go:66] Checking if "default-k8s-diff-port-007533" exists ...
	I1013 22:09:41.599186  199649 config.go:182] Loaded profile config "default-k8s-diff-port-007533": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:09:41.599253  199649 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-007533"
	I1013 22:09:41.599270  199649 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-007533"
	I1013 22:09:41.599723  199649 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-007533 --format={{.State.Status}}
	I1013 22:09:41.599833  199649 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-007533 --format={{.State.Status}}
	I1013 22:09:41.602623  199649 out.go:179] * Verifying Kubernetes components...
	I1013 22:09:41.606117  199649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:09:41.643340  199649 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1013 22:09:41.645219  199649 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-007533"
	I1013 22:09:41.645257  199649 host.go:66] Checking if "default-k8s-diff-port-007533" exists ...
	I1013 22:09:41.645670  199649 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-007533 --format={{.State.Status}}
	I1013 22:09:41.646510  199649 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 22:09:41.646531  199649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1013 22:09:41.646575  199649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-007533
	I1013 22:09:41.688788  199649 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1013 22:09:41.688808  199649 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1013 22:09:41.688868  199649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-007533
	I1013 22:09:41.690483  199649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33081 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/default-k8s-diff-port-007533/id_rsa Username:docker}
	I1013 22:09:41.722569  199649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33081 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/default-k8s-diff-port-007533/id_rsa Username:docker}
	I1013 22:09:41.947244  199649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1013 22:09:41.947413  199649 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 22:09:42.022269  199649 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 22:09:42.116206  199649 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1013 22:09:42.589157  199649 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1013 22:09:42.590373  199649 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-007533" to be "Ready" ...
	I1013 22:09:42.842176  199649 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1013 22:09:42.845070  199649 addons.go:514] duration metric: took 1.246076078s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1013 22:09:43.094196  199649 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-007533" context rescaled to 1 replicas
	W1013 22:09:44.594978  199649 node_ready.go:57] node "default-k8s-diff-port-007533" has "Ready":"False" status (will retry)
	W1013 22:09:47.093226  199649 node_ready.go:57] node "default-k8s-diff-port-007533" has "Ready":"False" status (will retry)
	W1013 22:09:49.093448  199649 node_ready.go:57] node "default-k8s-diff-port-007533" has "Ready":"False" status (will retry)
	W1013 22:09:51.098093  199649 node_ready.go:57] node "default-k8s-diff-port-007533" has "Ready":"False" status (will retry)
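The node_ready retries above keep polling until the node reports Ready=True or the 6m0s budget runs out. Below is a minimal client-go sketch of such a poll, purely illustrative: the function name, the 2-second retry interval, and the kubeconfig path are assumptions, not minikube's implementation.

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitForNodeReady polls until the named node reports Ready=True or the
    // timeout expires, mirroring the "waiting up to 6m0s for node ..." loop above.
    func waitForNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    		if err == nil {
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    					return nil
    				}
    			}
    		}
    		time.Sleep(2 * time.Second) // retry interval is illustrative
    	}
    	return fmt.Errorf("node %q not Ready within %s", name, timeout)
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	if err := waitForNodeReady(context.Background(), cs, "default-k8s-diff-port-007533", 6*time.Minute); err != nil {
    		panic(err)
    	}
    	fmt.Println("node is Ready")
    }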
	
	
	==> CRI-O <==
	Oct 13 22:09:40 embed-certs-251758 crio[649]: time="2025-10-13T22:09:40.591285309Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=e2174865-3332-4da8-a2c2-e26978eabb85 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:09:40 embed-certs-251758 crio[649]: time="2025-10-13T22:09:40.592249214Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=e3083e23-9da0-4b6a-aa8b-0699a6d19d83 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:09:40 embed-certs-251758 crio[649]: time="2025-10-13T22:09:40.593530011Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mjklg/dashboard-metrics-scraper" id=e3228a7b-d8a5-402b-ac68-39d9768e70a5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:09:40 embed-certs-251758 crio[649]: time="2025-10-13T22:09:40.593775215Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:09:40 embed-certs-251758 crio[649]: time="2025-10-13T22:09:40.600694211Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:09:40 embed-certs-251758 crio[649]: time="2025-10-13T22:09:40.601333062Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:09:40 embed-certs-251758 crio[649]: time="2025-10-13T22:09:40.617527877Z" level=info msg="Created container c3551818d9baeddb52ce2c4239dc602b788f3242a90459a1844f4ab66a1d96ae: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mjklg/dashboard-metrics-scraper" id=e3228a7b-d8a5-402b-ac68-39d9768e70a5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:09:40 embed-certs-251758 crio[649]: time="2025-10-13T22:09:40.618700705Z" level=info msg="Starting container: c3551818d9baeddb52ce2c4239dc602b788f3242a90459a1844f4ab66a1d96ae" id=85b55705-e3ef-4789-9e16-ad8accb88a0f name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 22:09:40 embed-certs-251758 crio[649]: time="2025-10-13T22:09:40.620316138Z" level=info msg="Started container" PID=1671 containerID=c3551818d9baeddb52ce2c4239dc602b788f3242a90459a1844f4ab66a1d96ae description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mjklg/dashboard-metrics-scraper id=85b55705-e3ef-4789-9e16-ad8accb88a0f name=/runtime.v1.RuntimeService/StartContainer sandboxID=9203e0a94da1573f11ede02a2f19b39b28970786fb082293ace23d25cfa3e806
	Oct 13 22:09:40 embed-certs-251758 conmon[1669]: conmon c3551818d9baeddb52ce <ninfo>: container 1671 exited with status 1
	Oct 13 22:09:40 embed-certs-251758 crio[649]: time="2025-10-13T22:09:40.834019675Z" level=info msg="Removing container: 04c3ca1d5cffa5b5f71ffd2f7d2ba3e52a4517235c6cc59a13f0f5f938d6ae60" id=75c67bc5-66f8-490c-84ed-4ee3a6776f30 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 13 22:09:40 embed-certs-251758 crio[649]: time="2025-10-13T22:09:40.841396776Z" level=info msg="Error loading conmon cgroup of container 04c3ca1d5cffa5b5f71ffd2f7d2ba3e52a4517235c6cc59a13f0f5f938d6ae60: cgroup deleted" id=75c67bc5-66f8-490c-84ed-4ee3a6776f30 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 13 22:09:40 embed-certs-251758 crio[649]: time="2025-10-13T22:09:40.845647803Z" level=info msg="Removed container 04c3ca1d5cffa5b5f71ffd2f7d2ba3e52a4517235c6cc59a13f0f5f938d6ae60: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mjklg/dashboard-metrics-scraper" id=75c67bc5-66f8-490c-84ed-4ee3a6776f30 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 13 22:09:44 embed-certs-251758 crio[649]: time="2025-10-13T22:09:44.934021685Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 13 22:09:44 embed-certs-251758 crio[649]: time="2025-10-13T22:09:44.938394596Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 13 22:09:44 embed-certs-251758 crio[649]: time="2025-10-13T22:09:44.938427678Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 13 22:09:44 embed-certs-251758 crio[649]: time="2025-10-13T22:09:44.938454204Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 13 22:09:44 embed-certs-251758 crio[649]: time="2025-10-13T22:09:44.941507222Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 13 22:09:44 embed-certs-251758 crio[649]: time="2025-10-13T22:09:44.941539058Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 13 22:09:44 embed-certs-251758 crio[649]: time="2025-10-13T22:09:44.941560596Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 13 22:09:44 embed-certs-251758 crio[649]: time="2025-10-13T22:09:44.944558263Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 13 22:09:44 embed-certs-251758 crio[649]: time="2025-10-13T22:09:44.944590894Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 13 22:09:44 embed-certs-251758 crio[649]: time="2025-10-13T22:09:44.944610069Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 13 22:09:44 embed-certs-251758 crio[649]: time="2025-10-13T22:09:44.947515973Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 13 22:09:44 embed-certs-251758 crio[649]: time="2025-10-13T22:09:44.947552895Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	c3551818d9bae       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           13 seconds ago      Exited              dashboard-metrics-scraper   2                   9203e0a94da15       dashboard-metrics-scraper-6ffb444bf9-mjklg   kubernetes-dashboard
	6fc9ddef880b6       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           19 seconds ago      Running             storage-provisioner         2                   91e42ae8a15bc       storage-provisioner                          kube-system
	a7b4e396ad98c       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   39 seconds ago      Running             kubernetes-dashboard        0                   7efc3209ecbfb       kubernetes-dashboard-855c9754f9-txgzm        kubernetes-dashboard
	867161f640012       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           49 seconds ago      Running             coredns                     1                   e24cef4c37481       coredns-66bc5c9577-gkbv8                     kube-system
	f0db61f244882       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           49 seconds ago      Running             busybox                     1                   a924d4dd12986       busybox                                      default
	79d9d302bbf1a       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           49 seconds ago      Running             kindnet-cni                 1                   13d73902d02de       kindnet-csh4p                                kube-system
	6d3d93734554a       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           49 seconds ago      Exited              storage-provisioner         1                   91e42ae8a15bc       storage-provisioner                          kube-system
	90dfa1eb353c5       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           50 seconds ago      Running             kube-proxy                  1                   38f259a1a8743       kube-proxy-nmmdh                             kube-system
	aae750af84a55       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           58 seconds ago      Running             kube-apiserver              1                   c957600faac77       kube-apiserver-embed-certs-251758            kube-system
	584a98c7ea440       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           58 seconds ago      Running             etcd                        1                   1740c9f3bca64       etcd-embed-certs-251758                      kube-system
	9c6989f62c117       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           58 seconds ago      Running             kube-controller-manager     1                   9326b20e3faad       kube-controller-manager-embed-certs-251758   kube-system
	5f76aa65f805b       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           58 seconds ago      Running             kube-scheduler              1                   ba176c67a4639       kube-scheduler-embed-certs-251758            kube-system
	
	
	==> coredns [867161f640012fdfebf1dab5ae2b56691570ae088f99d8eb681cb0a4d8504d85] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37855 - 45940 "HINFO IN 4803929117046417237.4397713878805235932. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023101931s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
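The configuration this CoreDNS instance reloaded (the "Running configuration SHA512" line above) is the stock kubeadm Corefile with minikube's host-record injection applied: the sed command logged earlier inserts a hosts block before the forward directive and a log directive before errors. A sketch of the resulting Corefile, assuming the stock kubeadm defaults; the injected gateway IP is per-cluster (192.168.76.1 is the value shown for default-k8s-diff-port-007533, while embed-certs-251758 would get its own network's gateway, which is not shown in this excerpt):

    .:53 {
        log
        errors
        health {
           lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
           ttl 30
        }
        prometheus :9153
        hosts {
           192.168.76.1 host.minikube.internal   # per-cluster gateway; value taken from the injection command above
           fallthrough
        }
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }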
	
	
	==> describe nodes <==
	Name:               embed-certs-251758
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-251758
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6d66ff63385795e7745a92b3d96cb54f5b977801
	                    minikube.k8s.io/name=embed-certs-251758
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_13T22_07_35_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 Oct 2025 22:07:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-251758
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 Oct 2025 22:09:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 Oct 2025 22:09:44 +0000   Mon, 13 Oct 2025 22:07:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 Oct 2025 22:09:44 +0000   Mon, 13 Oct 2025 22:07:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 Oct 2025 22:09:44 +0000   Mon, 13 Oct 2025 22:07:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 13 Oct 2025 22:09:44 +0000   Mon, 13 Oct 2025 22:08:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-251758
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 204b64ea2a9c4279bdcc32d1fd6a1957
	  System UUID:                f24253cd-26e9-4717-a721-e240cb5f208d
	  Boot ID:                    d306204e-e74c-4697-9887-c9f3c96cc083
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 coredns-66bc5c9577-gkbv8                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m15s
	  kube-system                 etcd-embed-certs-251758                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m20s
	  kube-system                 kindnet-csh4p                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m16s
	  kube-system                 kube-apiserver-embed-certs-251758             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 kube-controller-manager-embed-certs-251758    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 kube-proxy-nmmdh                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m16s
	  kube-system                 kube-scheduler-embed-certs-251758             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m14s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-mjklg    0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-txgzm         0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m13s                  kube-proxy       
	  Normal   Starting                 48s                    kube-proxy       
	  Normal   Starting                 2m28s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m28s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m28s (x8 over 2m28s)  kubelet          Node embed-certs-251758 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m28s (x8 over 2m28s)  kubelet          Node embed-certs-251758 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m28s (x7 over 2m28s)  kubelet          Node embed-certs-251758 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m21s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m21s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m20s                  kubelet          Node embed-certs-251758 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m20s                  kubelet          Node embed-certs-251758 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m20s                  kubelet          Node embed-certs-251758 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           2m17s                  node-controller  Node embed-certs-251758 event: Registered Node embed-certs-251758 in Controller
	  Normal   NodeReady                94s                    kubelet          Node embed-certs-251758 status is now: NodeReady
	  Normal   Starting                 59s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 59s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  59s (x8 over 59s)      kubelet          Node embed-certs-251758 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    59s (x8 over 59s)      kubelet          Node embed-certs-251758 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     59s (x8 over 59s)      kubelet          Node embed-certs-251758 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           47s                    node-controller  Node embed-certs-251758 event: Registered Node embed-certs-251758 in Controller
	
	
	==> dmesg <==
	[Oct13 21:40] overlayfs: idmapped layers are currently not supported
	[Oct13 21:41] overlayfs: idmapped layers are currently not supported
	[Oct13 21:42] overlayfs: idmapped layers are currently not supported
	[  +7.684868] overlayfs: idmapped layers are currently not supported
	[Oct13 21:43] overlayfs: idmapped layers are currently not supported
	[ +17.500139] overlayfs: idmapped layers are currently not supported
	[Oct13 21:44] overlayfs: idmapped layers are currently not supported
	[ +25.978359] overlayfs: idmapped layers are currently not supported
	[Oct13 21:46] overlayfs: idmapped layers are currently not supported
	[Oct13 21:47] overlayfs: idmapped layers are currently not supported
	[Oct13 21:49] overlayfs: idmapped layers are currently not supported
	[Oct13 21:50] overlayfs: idmapped layers are currently not supported
	[Oct13 21:51] overlayfs: idmapped layers are currently not supported
	[Oct13 21:53] overlayfs: idmapped layers are currently not supported
	[Oct13 21:54] overlayfs: idmapped layers are currently not supported
	[Oct13 21:55] overlayfs: idmapped layers are currently not supported
	[Oct13 22:02] overlayfs: idmapped layers are currently not supported
	[Oct13 22:04] overlayfs: idmapped layers are currently not supported
	[ +37.438407] overlayfs: idmapped layers are currently not supported
	[Oct13 22:05] overlayfs: idmapped layers are currently not supported
	[Oct13 22:06] overlayfs: idmapped layers are currently not supported
	[Oct13 22:07] overlayfs: idmapped layers are currently not supported
	[ +29.672836] overlayfs: idmapped layers are currently not supported
	[Oct13 22:08] overlayfs: idmapped layers are currently not supported
	[Oct13 22:09] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [584a98c7ea4404c695d25b77ddef1fab1aca6fa39f58483da8a818a558fb996c] <==
	{"level":"warn","ts":"2025-10-13T22:09:00.222832Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:09:00.265658Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:09:00.289465Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:09:00.315158Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:09:00.341643Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:09:00.372713Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:09:00.397456Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:09:00.436397Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:09:00.452838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:09:00.499888Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:09:00.510005Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:09:00.558486Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:09:00.588588Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:09:00.611987Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:09:00.643916Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:09:00.677831Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:09:00.692831Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:09:00.717594Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:09:00.756629Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:09:00.761191Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:09:00.802994Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:09:00.828666Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:09:00.864920Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:09:00.907518Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:09:01.119786Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43692","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 22:09:54 up  1:52,  0 user,  load average: 3.56, 2.99, 2.33
	Linux embed-certs-251758 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [79d9d302bbf1a83c8987224c8f4facc00eeb460f55b3bfa8c4bf25cd20012882] <==
	I1013 22:09:04.727896       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1013 22:09:04.728117       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1013 22:09:04.736506       1 main.go:148] setting mtu 1500 for CNI 
	I1013 22:09:04.736535       1 main.go:178] kindnetd IP family: "ipv4"
	I1013 22:09:04.736553       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-13T22:09:04Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1013 22:09:04.933679       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1013 22:09:04.933753       1 controller.go:381] "Waiting for informer caches to sync"
	I1013 22:09:04.933796       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1013 22:09:04.934506       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1013 22:09:34.934255       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1013 22:09:34.934475       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1013 22:09:34.934631       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1013 22:09:34.934693       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1013 22:09:36.533989       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1013 22:09:36.534086       1 metrics.go:72] Registering metrics
	I1013 22:09:36.534180       1 controller.go:711] "Syncing nftables rules"
	I1013 22:09:44.933731       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1013 22:09:44.933782       1 main.go:301] handling current node
	
	
	==> kube-apiserver [aae750af84a55af314225c0685c8ae60d5e9a75591e1edfaf24d63b4ef9dacec] <==
	I1013 22:09:02.899303       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1013 22:09:02.899537       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1013 22:09:02.899967       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1013 22:09:02.908000       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1013 22:09:02.908123       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1013 22:09:02.908135       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1013 22:09:02.908238       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1013 22:09:02.908280       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1013 22:09:02.916563       1 aggregator.go:171] initial CRD sync complete...
	I1013 22:09:02.922852       1 autoregister_controller.go:144] Starting autoregister controller
	I1013 22:09:02.924376       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1013 22:09:02.924420       1 cache.go:39] Caches are synced for autoregister controller
	I1013 22:09:02.924581       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1013 22:09:02.972810       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1013 22:09:03.262168       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1013 22:09:03.694238       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1013 22:09:04.675065       1 controller.go:667] quota admission added evaluator for: namespaces
	I1013 22:09:05.057737       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1013 22:09:05.163216       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1013 22:09:05.211469       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1013 22:09:05.386402       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.119.16"}
	I1013 22:09:05.411730       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.245.23"}
	I1013 22:09:07.545131       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1013 22:09:07.597083       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1013 22:09:07.821695       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [9c6989f62c1172b0c0f363d4229f6d6e18f8427d7c917aa15eacb2457bfad0a2] <==
	I1013 22:09:07.349549       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1013 22:09:07.349576       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1013 22:09:07.353464       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1013 22:09:07.354166       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1013 22:09:07.356757       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1013 22:09:07.356786       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1013 22:09:07.356802       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1013 22:09:07.357200       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1013 22:09:07.385745       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 22:09:07.385846       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1013 22:09:07.385876       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1013 22:09:07.386134       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1013 22:09:07.386932       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1013 22:09:07.391985       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 22:09:07.392002       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1013 22:09:07.401729       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1013 22:09:07.407765       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1013 22:09:07.414725       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1013 22:09:07.417085       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1013 22:09:07.421239       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1013 22:09:07.421386       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1013 22:09:07.421499       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-251758"
	I1013 22:09:07.421573       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1013 22:09:07.439896       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1013 22:09:07.440004       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	
	
	==> kube-proxy [90dfa1eb353c58a620786cbe9e0b45cd92ede40e22ea29e7b93ccc4a41008baf] <==
	I1013 22:09:04.244722       1 server_linux.go:53] "Using iptables proxy"
	I1013 22:09:05.399443       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 22:09:05.502848       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 22:09:05.502959       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1013 22:09:05.503082       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 22:09:05.529887       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1013 22:09:05.530001       1 server_linux.go:132] "Using iptables Proxier"
	I1013 22:09:05.534328       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 22:09:05.534693       1 server.go:527] "Version info" version="v1.34.1"
	I1013 22:09:05.534973       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 22:09:05.536574       1 config.go:200] "Starting service config controller"
	I1013 22:09:05.536639       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 22:09:05.536679       1 config.go:106] "Starting endpoint slice config controller"
	I1013 22:09:05.536706       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 22:09:05.536743       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 22:09:05.536771       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 22:09:05.540775       1 config.go:309] "Starting node config controller"
	I1013 22:09:05.540847       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 22:09:05.540879       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 22:09:05.637294       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1013 22:09:05.637417       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1013 22:09:05.637260       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [5f76aa65f805b69f7a41cf737a66368820d106aa35a1bd6fad89654cbc4c61aa] <==
	I1013 22:09:01.094350       1 serving.go:386] Generated self-signed cert in-memory
	I1013 22:09:04.714378       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1013 22:09:04.714412       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 22:09:04.750093       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1013 22:09:04.750202       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1013 22:09:04.750221       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1013 22:09:04.750246       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1013 22:09:04.764590       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 22:09:04.764613       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 22:09:04.764633       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1013 22:09:04.764642       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1013 22:09:04.850900       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1013 22:09:04.867321       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1013 22:09:04.867572       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 13 22:09:07 embed-certs-251758 kubelet[773]: I1013 22:09:07.931620     773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/d92af1b4-675d-48d6-b1e5-f1e88ecad032-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-txgzm\" (UID: \"d92af1b4-675d-48d6-b1e5-f1e88ecad032\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-txgzm"
	Oct 13 22:09:07 embed-certs-251758 kubelet[773]: I1013 22:09:07.932431     773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxdtb\" (UniqueName: \"kubernetes.io/projected/d92af1b4-675d-48d6-b1e5-f1e88ecad032-kube-api-access-dxdtb\") pod \"kubernetes-dashboard-855c9754f9-txgzm\" (UID: \"d92af1b4-675d-48d6-b1e5-f1e88ecad032\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-txgzm"
	Oct 13 22:09:07 embed-certs-251758 kubelet[773]: I1013 22:09:07.932472     773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/fd98834a-1072-43cc-8122-18f42b378902-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-mjklg\" (UID: \"fd98834a-1072-43cc-8122-18f42b378902\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mjklg"
	Oct 13 22:09:07 embed-certs-251758 kubelet[773]: I1013 22:09:07.932508     773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48xpk\" (UniqueName: \"kubernetes.io/projected/fd98834a-1072-43cc-8122-18f42b378902-kube-api-access-48xpk\") pod \"dashboard-metrics-scraper-6ffb444bf9-mjklg\" (UID: \"fd98834a-1072-43cc-8122-18f42b378902\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mjklg"
	Oct 13 22:09:08 embed-certs-251758 kubelet[773]: W1013 22:09:08.187925     773 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/bce2b62de8b18e7b3cd1dad60cf5b9b624701b45de81bf20dc286e3c67300396/crio-7efc3209ecbfb753f685fbb8173b50e12e5797ef32af01645e00eecc112bdc1b WatchSource:0}: Error finding container 7efc3209ecbfb753f685fbb8173b50e12e5797ef32af01645e00eecc112bdc1b: Status 404 returned error can't find the container with id 7efc3209ecbfb753f685fbb8173b50e12e5797ef32af01645e00eecc112bdc1b
	Oct 13 22:09:08 embed-certs-251758 kubelet[773]: W1013 22:09:08.212710     773 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/bce2b62de8b18e7b3cd1dad60cf5b9b624701b45de81bf20dc286e3c67300396/crio-9203e0a94da1573f11ede02a2f19b39b28970786fb082293ace23d25cfa3e806 WatchSource:0}: Error finding container 9203e0a94da1573f11ede02a2f19b39b28970786fb082293ace23d25cfa3e806: Status 404 returned error can't find the container with id 9203e0a94da1573f11ede02a2f19b39b28970786fb082293ace23d25cfa3e806
	Oct 13 22:09:14 embed-certs-251758 kubelet[773]: I1013 22:09:14.777127     773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-txgzm" podStartSLOduration=1.293617922 podStartE2EDuration="7.777110104s" podCreationTimestamp="2025-10-13 22:09:07 +0000 UTC" firstStartedPulling="2025-10-13 22:09:08.191573462 +0000 UTC m=+12.913246793" lastFinishedPulling="2025-10-13 22:09:14.675065645 +0000 UTC m=+19.396738975" observedRunningTime="2025-10-13 22:09:14.776374763 +0000 UTC m=+19.498048102" watchObservedRunningTime="2025-10-13 22:09:14.777110104 +0000 UTC m=+19.498783435"
	Oct 13 22:09:21 embed-certs-251758 kubelet[773]: I1013 22:09:21.778241     773 scope.go:117] "RemoveContainer" containerID="b047f341f317c0f7e2a55e121f7ea44caf1d0f6d3f3b5d3696b6cb8d77ea4971"
	Oct 13 22:09:22 embed-certs-251758 kubelet[773]: I1013 22:09:22.782428     773 scope.go:117] "RemoveContainer" containerID="b047f341f317c0f7e2a55e121f7ea44caf1d0f6d3f3b5d3696b6cb8d77ea4971"
	Oct 13 22:09:22 embed-certs-251758 kubelet[773]: I1013 22:09:22.783386     773 scope.go:117] "RemoveContainer" containerID="04c3ca1d5cffa5b5f71ffd2f7d2ba3e52a4517235c6cc59a13f0f5f938d6ae60"
	Oct 13 22:09:22 embed-certs-251758 kubelet[773]: E1013 22:09:22.783627     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-mjklg_kubernetes-dashboard(fd98834a-1072-43cc-8122-18f42b378902)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mjklg" podUID="fd98834a-1072-43cc-8122-18f42b378902"
	Oct 13 22:09:23 embed-certs-251758 kubelet[773]: I1013 22:09:23.787035     773 scope.go:117] "RemoveContainer" containerID="04c3ca1d5cffa5b5f71ffd2f7d2ba3e52a4517235c6cc59a13f0f5f938d6ae60"
	Oct 13 22:09:23 embed-certs-251758 kubelet[773]: E1013 22:09:23.787765     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-mjklg_kubernetes-dashboard(fd98834a-1072-43cc-8122-18f42b378902)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mjklg" podUID="fd98834a-1072-43cc-8122-18f42b378902"
	Oct 13 22:09:28 embed-certs-251758 kubelet[773]: I1013 22:09:28.138280     773 scope.go:117] "RemoveContainer" containerID="04c3ca1d5cffa5b5f71ffd2f7d2ba3e52a4517235c6cc59a13f0f5f938d6ae60"
	Oct 13 22:09:28 embed-certs-251758 kubelet[773]: E1013 22:09:28.138974     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-mjklg_kubernetes-dashboard(fd98834a-1072-43cc-8122-18f42b378902)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mjklg" podUID="fd98834a-1072-43cc-8122-18f42b378902"
	Oct 13 22:09:34 embed-certs-251758 kubelet[773]: I1013 22:09:34.813121     773 scope.go:117] "RemoveContainer" containerID="6d3d93734554a1ed7c0be2214b61d891319ff15030b44abcfa42a8cad20a6268"
	Oct 13 22:09:40 embed-certs-251758 kubelet[773]: I1013 22:09:40.590724     773 scope.go:117] "RemoveContainer" containerID="04c3ca1d5cffa5b5f71ffd2f7d2ba3e52a4517235c6cc59a13f0f5f938d6ae60"
	Oct 13 22:09:40 embed-certs-251758 kubelet[773]: I1013 22:09:40.830961     773 scope.go:117] "RemoveContainer" containerID="04c3ca1d5cffa5b5f71ffd2f7d2ba3e52a4517235c6cc59a13f0f5f938d6ae60"
	Oct 13 22:09:40 embed-certs-251758 kubelet[773]: I1013 22:09:40.831258     773 scope.go:117] "RemoveContainer" containerID="c3551818d9baeddb52ce2c4239dc602b788f3242a90459a1844f4ab66a1d96ae"
	Oct 13 22:09:40 embed-certs-251758 kubelet[773]: E1013 22:09:40.831416     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-mjklg_kubernetes-dashboard(fd98834a-1072-43cc-8122-18f42b378902)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mjklg" podUID="fd98834a-1072-43cc-8122-18f42b378902"
	Oct 13 22:09:48 embed-certs-251758 kubelet[773]: I1013 22:09:48.138051     773 scope.go:117] "RemoveContainer" containerID="c3551818d9baeddb52ce2c4239dc602b788f3242a90459a1844f4ab66a1d96ae"
	Oct 13 22:09:48 embed-certs-251758 kubelet[773]: E1013 22:09:48.138662     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-mjklg_kubernetes-dashboard(fd98834a-1072-43cc-8122-18f42b378902)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mjklg" podUID="fd98834a-1072-43cc-8122-18f42b378902"
	Oct 13 22:09:51 embed-certs-251758 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 13 22:09:51 embed-certs-251758 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 13 22:09:51 embed-certs-251758 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [a7b4e396ad98c890c97a0efa0e70d34ea49729a1e195184cb861210865588c8c] <==
	2025/10/13 22:09:14 Using namespace: kubernetes-dashboard
	2025/10/13 22:09:14 Using in-cluster config to connect to apiserver
	2025/10/13 22:09:14 Using secret token for csrf signing
	2025/10/13 22:09:14 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/13 22:09:14 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/13 22:09:14 Successful initial request to the apiserver, version: v1.34.1
	2025/10/13 22:09:14 Generating JWE encryption key
	2025/10/13 22:09:14 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/13 22:09:14 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/13 22:09:16 Initializing JWE encryption key from synchronized object
	2025/10/13 22:09:16 Creating in-cluster Sidecar client
	2025/10/13 22:09:16 Serving insecurely on HTTP port: 9090
	2025/10/13 22:09:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/13 22:09:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/13 22:09:14 Starting overwatch
	
	
	==> storage-provisioner [6d3d93734554a1ed7c0be2214b61d891319ff15030b44abcfa42a8cad20a6268] <==
	I1013 22:09:04.534658       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1013 22:09:34.537160       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [6fc9ddef880b60d90c1173448c140aac193f748165971840fb9a1cfdc4aa1d70] <==
	I1013 22:09:34.898548       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1013 22:09:34.923920       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1013 22:09:34.924048       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1013 22:09:34.926658       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:09:38.382170       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:09:42.642516       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:09:46.240634       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:09:49.294100       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:09:52.316179       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:09:52.321180       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1013 22:09:52.321317       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1013 22:09:52.321486       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-251758_f4e0248b-9f45-4ac8-a76a-d608d0fcc10a!
	I1013 22:09:52.322341       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"56f6d001-d20d-4495-ba7d-2f8ddd8e7ade", APIVersion:"v1", ResourceVersion:"677", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-251758_f4e0248b-9f45-4ac8-a76a-d608d0fcc10a became leader
	W1013 22:09:52.329965       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:09:52.353466       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1013 22:09:52.422563       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-251758_f4e0248b-9f45-4ac8-a76a-d608d0fcc10a!
	W1013 22:09:54.356447       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:09:54.362246       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-251758 -n embed-certs-251758
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-251758 -n embed-certs-251758: exit status 2 (379.485275ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-251758 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-251758
helpers_test.go:243: (dbg) docker inspect embed-certs-251758:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "bce2b62de8b18e7b3cd1dad60cf5b9b624701b45de81bf20dc286e3c67300396",
	        "Created": "2025-10-13T22:07:07.277688258Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 196834,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-13T22:08:48.305179534Z",
	            "FinishedAt": "2025-10-13T22:08:47.488726439Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/bce2b62de8b18e7b3cd1dad60cf5b9b624701b45de81bf20dc286e3c67300396/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bce2b62de8b18e7b3cd1dad60cf5b9b624701b45de81bf20dc286e3c67300396/hostname",
	        "HostsPath": "/var/lib/docker/containers/bce2b62de8b18e7b3cd1dad60cf5b9b624701b45de81bf20dc286e3c67300396/hosts",
	        "LogPath": "/var/lib/docker/containers/bce2b62de8b18e7b3cd1dad60cf5b9b624701b45de81bf20dc286e3c67300396/bce2b62de8b18e7b3cd1dad60cf5b9b624701b45de81bf20dc286e3c67300396-json.log",
	        "Name": "/embed-certs-251758",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-251758:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-251758",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "bce2b62de8b18e7b3cd1dad60cf5b9b624701b45de81bf20dc286e3c67300396",
	                "LowerDir": "/var/lib/docker/overlay2/6627eb940d8f167382df1d3afa375f7fb85691aca9adf9e1dbdcb28e949b9a84-init/diff:/var/lib/docker/overlay2/160e637be34680c755ffc6109c686a7fe5c4dfcba06c9274b3806724dc064518/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6627eb940d8f167382df1d3afa375f7fb85691aca9adf9e1dbdcb28e949b9a84/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6627eb940d8f167382df1d3afa375f7fb85691aca9adf9e1dbdcb28e949b9a84/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6627eb940d8f167382df1d3afa375f7fb85691aca9adf9e1dbdcb28e949b9a84/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-251758",
	                "Source": "/var/lib/docker/volumes/embed-certs-251758/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-251758",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-251758",
	                "name.minikube.sigs.k8s.io": "embed-certs-251758",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8db7f867bf1bf3a517399cdce59e6ef9a51b677c5e79cf783a537b6b9f9db3a8",
	            "SandboxKey": "/var/run/docker/netns/8db7f867bf1b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33076"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33077"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33080"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33078"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33079"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-251758": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ba:96:18:34:21:63",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b9096ba29d296c438f9a557fd2db13e4e114de39426eb54481a5b79f96f151ea",
	                    "EndpointID": "bc1f1b45dd592e3a0fbf5c10346635ab9dbadfa55039de5e214ec4139090d231",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-251758",
	                        "bce2b62de8b1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-251758 -n embed-certs-251758
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-251758 -n embed-certs-251758: exit status 2 (342.086568ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-251758 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-251758 logs -n 25: (1.267124054s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p old-k8s-version-061725 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-061725       │ jenkins │ v1.37.0 │ 13 Oct 25 22:05 UTC │ 13 Oct 25 22:06 UTC │
	│ start   │ -p cert-expiration-546667 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-546667       │ jenkins │ v1.37.0 │ 13 Oct 25 22:05 UTC │ 13 Oct 25 22:06 UTC │
	│ delete  │ -p cert-expiration-546667                                                                                                                                                                                                                     │ cert-expiration-546667       │ jenkins │ v1.37.0 │ 13 Oct 25 22:06 UTC │ 13 Oct 25 22:06 UTC │
	│ start   │ -p no-preload-998398 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-998398            │ jenkins │ v1.37.0 │ 13 Oct 25 22:06 UTC │ 13 Oct 25 22:07 UTC │
	│ image   │ old-k8s-version-061725 image list --format=json                                                                                                                                                                                               │ old-k8s-version-061725       │ jenkins │ v1.37.0 │ 13 Oct 25 22:06 UTC │ 13 Oct 25 22:06 UTC │
	│ pause   │ -p old-k8s-version-061725 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-061725       │ jenkins │ v1.37.0 │ 13 Oct 25 22:06 UTC │                     │
	│ delete  │ -p old-k8s-version-061725                                                                                                                                                                                                                     │ old-k8s-version-061725       │ jenkins │ v1.37.0 │ 13 Oct 25 22:06 UTC │ 13 Oct 25 22:07 UTC │
	│ delete  │ -p old-k8s-version-061725                                                                                                                                                                                                                     │ old-k8s-version-061725       │ jenkins │ v1.37.0 │ 13 Oct 25 22:07 UTC │ 13 Oct 25 22:07 UTC │
	│ start   │ -p embed-certs-251758 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-251758           │ jenkins │ v1.37.0 │ 13 Oct 25 22:07 UTC │ 13 Oct 25 22:08 UTC │
	│ addons  │ enable metrics-server -p no-preload-998398 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-998398            │ jenkins │ v1.37.0 │ 13 Oct 25 22:07 UTC │                     │
	│ stop    │ -p no-preload-998398 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-998398            │ jenkins │ v1.37.0 │ 13 Oct 25 22:07 UTC │ 13 Oct 25 22:07 UTC │
	│ addons  │ enable dashboard -p no-preload-998398 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-998398            │ jenkins │ v1.37.0 │ 13 Oct 25 22:07 UTC │ 13 Oct 25 22:07 UTC │
	│ start   │ -p no-preload-998398 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-998398            │ jenkins │ v1.37.0 │ 13 Oct 25 22:07 UTC │ 13 Oct 25 22:08 UTC │
	│ addons  │ enable metrics-server -p embed-certs-251758 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-251758           │ jenkins │ v1.37.0 │ 13 Oct 25 22:08 UTC │                     │
	│ stop    │ -p embed-certs-251758 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-251758           │ jenkins │ v1.37.0 │ 13 Oct 25 22:08 UTC │ 13 Oct 25 22:08 UTC │
	│ addons  │ enable dashboard -p embed-certs-251758 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-251758           │ jenkins │ v1.37.0 │ 13 Oct 25 22:08 UTC │ 13 Oct 25 22:08 UTC │
	│ start   │ -p embed-certs-251758 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-251758           │ jenkins │ v1.37.0 │ 13 Oct 25 22:08 UTC │ 13 Oct 25 22:09 UTC │
	│ image   │ no-preload-998398 image list --format=json                                                                                                                                                                                                    │ no-preload-998398            │ jenkins │ v1.37.0 │ 13 Oct 25 22:08 UTC │ 13 Oct 25 22:08 UTC │
	│ pause   │ -p no-preload-998398 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-998398            │ jenkins │ v1.37.0 │ 13 Oct 25 22:08 UTC │                     │
	│ delete  │ -p no-preload-998398                                                                                                                                                                                                                          │ no-preload-998398            │ jenkins │ v1.37.0 │ 13 Oct 25 22:08 UTC │ 13 Oct 25 22:09 UTC │
	│ delete  │ -p no-preload-998398                                                                                                                                                                                                                          │ no-preload-998398            │ jenkins │ v1.37.0 │ 13 Oct 25 22:09 UTC │ 13 Oct 25 22:09 UTC │
	│ delete  │ -p disable-driver-mounts-691681                                                                                                                                                                                                               │ disable-driver-mounts-691681 │ jenkins │ v1.37.0 │ 13 Oct 25 22:09 UTC │ 13 Oct 25 22:09 UTC │
	│ start   │ -p default-k8s-diff-port-007533 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-007533 │ jenkins │ v1.37.0 │ 13 Oct 25 22:09 UTC │                     │
	│ image   │ embed-certs-251758 image list --format=json                                                                                                                                                                                                   │ embed-certs-251758           │ jenkins │ v1.37.0 │ 13 Oct 25 22:09 UTC │ 13 Oct 25 22:09 UTC │
	│ pause   │ -p embed-certs-251758 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-251758           │ jenkins │ v1.37.0 │ 13 Oct 25 22:09 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 22:09:03
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 22:09:03.469276  199649 out.go:360] Setting OutFile to fd 1 ...
	I1013 22:09:03.469509  199649 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:09:03.469538  199649 out.go:374] Setting ErrFile to fd 2...
	I1013 22:09:03.469566  199649 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:09:03.469862  199649 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-2495/.minikube/bin
	I1013 22:09:03.470350  199649 out.go:368] Setting JSON to false
	I1013 22:09:03.471387  199649 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":6678,"bootTime":1760386666,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1013 22:09:03.471482  199649 start.go:141] virtualization:  
	I1013 22:09:03.475167  199649 out.go:179] * [default-k8s-diff-port-007533] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1013 22:09:03.478180  199649 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 22:09:03.478252  199649 notify.go:220] Checking for updates...
	I1013 22:09:03.484390  199649 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 22:09:03.487398  199649 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-2495/kubeconfig
	I1013 22:09:03.490440  199649 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-2495/.minikube
	I1013 22:09:03.493889  199649 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1013 22:09:03.496797  199649 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 22:09:03.500281  199649 config.go:182] Loaded profile config "embed-certs-251758": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:09:03.500460  199649 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 22:09:03.546966  199649 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1013 22:09:03.547092  199649 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 22:09:03.679913  199649 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-13 22:09:03.666685348 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 22:09:03.680020  199649 docker.go:318] overlay module found
	I1013 22:09:03.683101  199649 out.go:179] * Using the docker driver based on user configuration
	I1013 22:09:03.685936  199649 start.go:305] selected driver: docker
	I1013 22:09:03.685955  199649 start.go:925] validating driver "docker" against <nil>
	I1013 22:09:03.685975  199649 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 22:09:03.686693  199649 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 22:09:03.802940  199649 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-13 22:09:03.788881112 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 22:09:03.803108  199649 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1013 22:09:03.803340  199649 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 22:09:03.806408  199649 out.go:179] * Using Docker driver with root privileges
	I1013 22:09:03.809790  199649 cni.go:84] Creating CNI manager for ""
	I1013 22:09:03.809862  199649 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 22:09:03.809877  199649 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1013 22:09:03.809953  199649 start.go:349] cluster config:
	{Name:default-k8s-diff-port-007533 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-007533 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:09:03.813429  199649 out.go:179] * Starting "default-k8s-diff-port-007533" primary control-plane node in "default-k8s-diff-port-007533" cluster
	I1013 22:09:03.816650  199649 cache.go:123] Beginning downloading kic base image for docker with crio
	I1013 22:09:03.825950  199649 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1013 22:09:03.829768  199649 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:09:03.829834  199649 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-2495/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1013 22:09:03.829845  199649 cache.go:58] Caching tarball of preloaded images
	I1013 22:09:03.829935  199649 preload.go:233] Found /home/jenkins/minikube-integration/21724-2495/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1013 22:09:03.829944  199649 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1013 22:09:03.830055  199649 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/default-k8s-diff-port-007533/config.json ...
	I1013 22:09:03.830074  199649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/default-k8s-diff-port-007533/config.json: {Name:mk8bcd3b0fcb3205d620b2adb470d3840baeacbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:09:03.830231  199649 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1013 22:09:03.856173  199649 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1013 22:09:03.856198  199649 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1013 22:09:03.856212  199649 cache.go:232] Successfully downloaded all kic artifacts
	I1013 22:09:03.856233  199649 start.go:360] acquireMachinesLock for default-k8s-diff-port-007533: {Name:mk990b5defb290df24f36fb536d48d3275652286 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 22:09:03.856321  199649 start.go:364] duration metric: took 73.557µs to acquireMachinesLock for "default-k8s-diff-port-007533"
	I1013 22:09:03.856361  199649 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-007533 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-007533 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 22:09:03.856434  199649 start.go:125] createHost starting for "" (driver="docker")
	I1013 22:09:05.260311  196707 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.991849675s)
	I1013 22:09:05.260380  196707 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (7.9516229s)
	I1013 22:09:05.260415  196707 node_ready.go:35] waiting up to 6m0s for node "embed-certs-251758" to be "Ready" ...
	I1013 22:09:05.260719  196707 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.81971968s)
	I1013 22:09:05.326803  196707 node_ready.go:49] node "embed-certs-251758" is "Ready"
	I1013 22:09:05.326878  196707 node_ready.go:38] duration metric: took 66.445016ms for node "embed-certs-251758" to be "Ready" ...
	I1013 22:09:05.326906  196707 api_server.go:52] waiting for apiserver process to appear ...
	I1013 22:09:05.326988  196707 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 22:09:05.429527  196707 api_server.go:72] duration metric: took 8.624157877s to wait for apiserver process to appear ...
	I1013 22:09:05.429547  196707 api_server.go:88] waiting for apiserver healthz status ...
	I1013 22:09:05.429564  196707 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1013 22:09:05.429920  196707 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.53572732s)
	I1013 22:09:05.433086  196707 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-251758 addons enable metrics-server
	
	I1013 22:09:05.436023  196707 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1013 22:09:05.439902  196707 addons.go:514] duration metric: took 8.634258733s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1013 22:09:05.440214  196707 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1013 22:09:05.441480  196707 api_server.go:141] control plane version: v1.34.1
	I1013 22:09:05.441498  196707 api_server.go:131] duration metric: took 11.945217ms to wait for apiserver health ...
	I1013 22:09:05.441505  196707 system_pods.go:43] waiting for kube-system pods to appear ...
	I1013 22:09:05.446368  196707 system_pods.go:59] 8 kube-system pods found
	I1013 22:09:05.446452  196707 system_pods.go:61] "coredns-66bc5c9577-gkbv8" [ae7b4689-bcb1-4a31-84a2-726b234eceb7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 22:09:05.446480  196707 system_pods.go:61] "etcd-embed-certs-251758" [a9014fd5-64d9-463c-a4a4-4d640bcdc8ca] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1013 22:09:05.446501  196707 system_pods.go:61] "kindnet-csh4p" [e79c32bc-0bbe-43e8-bbea-ccd4ff075bb7] Running
	I1013 22:09:05.446545  196707 system_pods.go:61] "kube-apiserver-embed-certs-251758" [93ef7029-7dd4-4212-a4b5-49b211fad012] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1013 22:09:05.446568  196707 system_pods.go:61] "kube-controller-manager-embed-certs-251758" [056a074f-a27c-4bd8-b72e-c997d1bafdd4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1013 22:09:05.446598  196707 system_pods.go:61] "kube-proxy-nmmdh" [7726987e-433d-4e17-9b95-7c1d46d6a2e3] Running
	I1013 22:09:05.446619  196707 system_pods.go:61] "kube-scheduler-embed-certs-251758" [ed4ecb4e-62c7-4b0b-825a-2f0c75fa8337] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1013 22:09:05.446645  196707 system_pods.go:61] "storage-provisioner" [aadbfae4-4ea2-4d6b-be6d-ac97012be757] Running
	I1013 22:09:05.446673  196707 system_pods.go:74] duration metric: took 5.161551ms to wait for pod list to return data ...
	I1013 22:09:05.446694  196707 default_sa.go:34] waiting for default service account to be created ...
	I1013 22:09:05.451335  196707 default_sa.go:45] found service account: "default"
	I1013 22:09:05.451391  196707 default_sa.go:55] duration metric: took 4.676198ms for default service account to be created ...
	I1013 22:09:05.451424  196707 system_pods.go:116] waiting for k8s-apps to be running ...
	I1013 22:09:05.455224  196707 system_pods.go:86] 8 kube-system pods found
	I1013 22:09:05.455300  196707 system_pods.go:89] "coredns-66bc5c9577-gkbv8" [ae7b4689-bcb1-4a31-84a2-726b234eceb7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 22:09:05.455327  196707 system_pods.go:89] "etcd-embed-certs-251758" [a9014fd5-64d9-463c-a4a4-4d640bcdc8ca] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1013 22:09:05.455349  196707 system_pods.go:89] "kindnet-csh4p" [e79c32bc-0bbe-43e8-bbea-ccd4ff075bb7] Running
	I1013 22:09:05.455389  196707 system_pods.go:89] "kube-apiserver-embed-certs-251758" [93ef7029-7dd4-4212-a4b5-49b211fad012] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1013 22:09:05.455410  196707 system_pods.go:89] "kube-controller-manager-embed-certs-251758" [056a074f-a27c-4bd8-b72e-c997d1bafdd4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1013 22:09:05.455429  196707 system_pods.go:89] "kube-proxy-nmmdh" [7726987e-433d-4e17-9b95-7c1d46d6a2e3] Running
	I1013 22:09:05.455458  196707 system_pods.go:89] "kube-scheduler-embed-certs-251758" [ed4ecb4e-62c7-4b0b-825a-2f0c75fa8337] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1013 22:09:05.455494  196707 system_pods.go:89] "storage-provisioner" [aadbfae4-4ea2-4d6b-be6d-ac97012be757] Running
	I1013 22:09:05.455516  196707 system_pods.go:126] duration metric: took 4.073087ms to wait for k8s-apps to be running ...
	I1013 22:09:05.455548  196707 system_svc.go:44] waiting for kubelet service to be running ....
	I1013 22:09:05.455626  196707 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 22:09:05.472455  196707 system_svc.go:56] duration metric: took 16.900529ms WaitForService to wait for kubelet
	I1013 22:09:05.472482  196707 kubeadm.go:586] duration metric: took 8.667117989s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 22:09:05.472501  196707 node_conditions.go:102] verifying NodePressure condition ...
	I1013 22:09:05.477912  196707 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1013 22:09:05.477977  196707 node_conditions.go:123] node cpu capacity is 2
	I1013 22:09:05.477990  196707 node_conditions.go:105] duration metric: took 5.484464ms to run NodePressure ...
	I1013 22:09:05.478019  196707 start.go:241] waiting for startup goroutines ...
	I1013 22:09:05.478034  196707 start.go:246] waiting for cluster config update ...
	I1013 22:09:05.478046  196707 start.go:255] writing updated cluster config ...
	I1013 22:09:05.478344  196707 ssh_runner.go:195] Run: rm -f paused
	I1013 22:09:05.482368  196707 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 22:09:05.547321  196707 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-gkbv8" in "kube-system" namespace to be "Ready" or be gone ...
	W1013 22:09:07.569788  196707 pod_ready.go:104] pod "coredns-66bc5c9577-gkbv8" is not "Ready", error: <nil>
	I1013 22:09:03.860187  199649 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1013 22:09:03.860416  199649 start.go:159] libmachine.API.Create for "default-k8s-diff-port-007533" (driver="docker")
	I1013 22:09:03.860454  199649 client.go:168] LocalClient.Create starting
	I1013 22:09:03.860531  199649 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem
	I1013 22:09:03.860563  199649 main.go:141] libmachine: Decoding PEM data...
	I1013 22:09:03.860576  199649 main.go:141] libmachine: Parsing certificate...
	I1013 22:09:03.860625  199649 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem
	I1013 22:09:03.860641  199649 main.go:141] libmachine: Decoding PEM data...
	I1013 22:09:03.860656  199649 main.go:141] libmachine: Parsing certificate...
	I1013 22:09:03.861003  199649 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-007533 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1013 22:09:03.881595  199649 cli_runner.go:211] docker network inspect default-k8s-diff-port-007533 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1013 22:09:03.881663  199649 network_create.go:284] running [docker network inspect default-k8s-diff-port-007533] to gather additional debugging logs...
	I1013 22:09:03.881679  199649 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-007533
	W1013 22:09:03.899761  199649 cli_runner.go:211] docker network inspect default-k8s-diff-port-007533 returned with exit code 1
	I1013 22:09:03.899925  199649 network_create.go:287] error running [docker network inspect default-k8s-diff-port-007533]: docker network inspect default-k8s-diff-port-007533: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-007533 not found
	I1013 22:09:03.899943  199649 network_create.go:289] output of [docker network inspect default-k8s-diff-port-007533]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-007533 not found
	
	** /stderr **
	I1013 22:09:03.900031  199649 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 22:09:03.936114  199649 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-95647f6063f5 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:2e:3d:b3:ce:26:60} reservation:<nil>}
	I1013 22:09:03.936602  199649 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-524c3512c6b6 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:06:88:a1:02:e0:8e} reservation:<nil>}
	I1013 22:09:03.936929  199649 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-2d17b8b5c002 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:aa:ca:29:7e:1f:a0} reservation:<nil>}
	I1013 22:09:03.937308  199649 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019dd0c0}
	I1013 22:09:03.937332  199649 network_create.go:124] attempt to create docker network default-k8s-diff-port-007533 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1013 22:09:03.937382  199649 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-007533 default-k8s-diff-port-007533
	I1013 22:09:04.033353  199649 network_create.go:108] docker network default-k8s-diff-port-007533 192.168.76.0/24 created
	I1013 22:09:04.033383  199649 kic.go:121] calculated static IP "192.168.76.2" for the "default-k8s-diff-port-007533" container
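For reference, the bridge network created above can be confirmed from the Docker host with a one-line inspect. This is a minimal check added for illustration, not output captured by the test; it assumes the default-k8s-diff-port-007533 network still exists and has a single IPAM entry:

    # Print the subnet and gateway minikube chose for the node network
    docker network inspect default-k8s-diff-port-007533 \
      --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'
    # expected, per the log above: 192.168.76.0/24 192.168.76.1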
	I1013 22:09:04.033478  199649 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1013 22:09:04.056834  199649 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-007533 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-007533 --label created_by.minikube.sigs.k8s.io=true
	I1013 22:09:04.087398  199649 oci.go:103] Successfully created a docker volume default-k8s-diff-port-007533
	I1013 22:09:04.087485  199649 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-007533-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-007533 --entrypoint /usr/bin/test -v default-k8s-diff-port-007533:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1013 22:09:04.822560  199649 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-007533
	I1013 22:09:04.822609  199649 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:09:04.822627  199649 kic.go:194] Starting extracting preloaded images to volume ...
	I1013 22:09:04.822697  199649 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-2495/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-007533:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	W1013 22:09:10.056067  196707 pod_ready.go:104] pod "coredns-66bc5c9577-gkbv8" is not "Ready", error: <nil>
	W1013 22:09:12.553823  196707 pod_ready.go:104] pod "coredns-66bc5c9577-gkbv8" is not "Ready", error: <nil>
	I1013 22:09:09.541636  199649 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-2495/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-007533:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.718899317s)
	I1013 22:09:09.541679  199649 kic.go:203] duration metric: took 4.719047062s to extract preloaded images to volume ...
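The two docker run invocations above first create the node volume and then unpack the v1.34.1 CRI-O preload tarball into it. A quick way to see where that volume lives and what landed in it is sketched below (not test output; it assumes the volume still exists and that a busybox image is available or pullable on the host, and the /var/lib layout is inferred from the extraction target shown in the log):

    # Host-side mountpoint of the node volume
    docker volume inspect default-k8s-diff-port-007533 --format '{{.Mountpoint}}'
    # Peek inside the volume via a throwaway container (the tarball was extracted into /var)
    docker run --rm -v default-k8s-diff-port-007533:/var busybox ls /var/lib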
	W1013 22:09:09.541807  199649 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1013 22:09:09.541920  199649 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1013 22:09:09.629293  199649 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-007533 --name default-k8s-diff-port-007533 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-007533 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-007533 --network default-k8s-diff-port-007533 --ip 192.168.76.2 --volume default-k8s-diff-port-007533:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1013 22:09:10.040070  199649 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-007533 --format={{.State.Running}}
	I1013 22:09:10.074941  199649 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-007533 --format={{.State.Status}}
	I1013 22:09:10.119574  199649 cli_runner.go:164] Run: docker exec default-k8s-diff-port-007533 stat /var/lib/dpkg/alternatives/iptables
	I1013 22:09:10.191181  199649 oci.go:144] the created container "default-k8s-diff-port-007533" has a running status.
	I1013 22:09:10.191226  199649 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21724-2495/.minikube/machines/default-k8s-diff-port-007533/id_rsa...
	I1013 22:09:10.393941  199649 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21724-2495/.minikube/machines/default-k8s-diff-port-007533/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1013 22:09:10.420585  199649 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-007533 --format={{.State.Status}}
	I1013 22:09:10.445066  199649 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1013 22:09:10.445084  199649 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-007533 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1013 22:09:10.521101  199649 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-007533 --format={{.State.Status}}
	I1013 22:09:10.554881  199649 machine.go:93] provisionDockerMachine start ...
	I1013 22:09:10.554986  199649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-007533
	I1013 22:09:10.590478  199649 main.go:141] libmachine: Using SSH client type: native
	I1013 22:09:10.590892  199649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33081 <nil> <nil>}
	I1013 22:09:10.590903  199649 main.go:141] libmachine: About to run SSH command:
	hostname
	I1013 22:09:10.591805  199649 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1013 22:09:13.743321  199649 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-007533
	
	I1013 22:09:13.743348  199649 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-007533"
	I1013 22:09:13.743420  199649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-007533
	I1013 22:09:13.763181  199649 main.go:141] libmachine: Using SSH client type: native
	I1013 22:09:13.763502  199649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33081 <nil> <nil>}
	I1013 22:09:13.763520  199649 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-007533 && echo "default-k8s-diff-port-007533" | sudo tee /etc/hostname
	I1013 22:09:13.936593  199649 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-007533
	
	I1013 22:09:13.936688  199649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-007533
	I1013 22:09:13.962507  199649 main.go:141] libmachine: Using SSH client type: native
	I1013 22:09:13.962816  199649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33081 <nil> <nil>}
	I1013 22:09:13.962840  199649 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-007533' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-007533/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-007533' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 22:09:14.124242  199649 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1013 22:09:14.124270  199649 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-2495/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-2495/.minikube}
	I1013 22:09:14.124300  199649 ubuntu.go:190] setting up certificates
	I1013 22:09:14.124311  199649 provision.go:84] configureAuth start
	I1013 22:09:14.124392  199649 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-007533
	I1013 22:09:14.146677  199649 provision.go:143] copyHostCerts
	I1013 22:09:14.146755  199649 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-2495/.minikube/ca.pem, removing ...
	I1013 22:09:14.146770  199649 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-2495/.minikube/ca.pem
	I1013 22:09:14.146843  199649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-2495/.minikube/ca.pem (1082 bytes)
	I1013 22:09:14.146944  199649 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-2495/.minikube/cert.pem, removing ...
	I1013 22:09:14.146956  199649 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-2495/.minikube/cert.pem
	I1013 22:09:14.146986  199649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-2495/.minikube/cert.pem (1123 bytes)
	I1013 22:09:14.147079  199649 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-2495/.minikube/key.pem, removing ...
	I1013 22:09:14.147090  199649 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-2495/.minikube/key.pem
	I1013 22:09:14.147122  199649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-2495/.minikube/key.pem (1675 bytes)
	I1013 22:09:14.147191  199649 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-2495/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-007533 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-007533 localhost minikube]
	I1013 22:09:14.595539  199649 provision.go:177] copyRemoteCerts
	I1013 22:09:14.595607  199649 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 22:09:14.595654  199649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-007533
	I1013 22:09:14.613553  199649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33081 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/default-k8s-diff-port-007533/id_rsa Username:docker}
	I1013 22:09:14.725199  199649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1013 22:09:14.749991  199649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1013 22:09:14.782421  199649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1013 22:09:14.803414  199649 provision.go:87] duration metric: took 679.079133ms to configureAuth
	I1013 22:09:14.803440  199649 ubuntu.go:206] setting minikube options for container-runtime
	I1013 22:09:14.803624  199649 config.go:182] Loaded profile config "default-k8s-diff-port-007533": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:09:14.803733  199649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-007533
	I1013 22:09:14.828135  199649 main.go:141] libmachine: Using SSH client type: native
	I1013 22:09:14.828461  199649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33081 <nil> <nil>}
	I1013 22:09:14.828477  199649 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1013 22:09:15.214101  199649 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1013 22:09:15.214126  199649 machine.go:96] duration metric: took 4.65922273s to provisionDockerMachine
	I1013 22:09:15.214136  199649 client.go:171] duration metric: took 11.353676095s to LocalClient.Create
	I1013 22:09:15.214149  199649 start.go:167] duration metric: took 11.353733998s to libmachine.API.Create "default-k8s-diff-port-007533"
	I1013 22:09:15.214157  199649 start.go:293] postStartSetup for "default-k8s-diff-port-007533" (driver="docker")
	I1013 22:09:15.214166  199649 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 22:09:15.214230  199649 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 22:09:15.214272  199649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-007533
	I1013 22:09:15.240735  199649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33081 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/default-k8s-diff-port-007533/id_rsa Username:docker}
	I1013 22:09:15.352907  199649 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 22:09:15.358623  199649 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1013 22:09:15.358649  199649 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1013 22:09:15.358659  199649 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-2495/.minikube/addons for local assets ...
	I1013 22:09:15.358718  199649 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-2495/.minikube/files for local assets ...
	I1013 22:09:15.358807  199649 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem -> 42992.pem in /etc/ssl/certs
	I1013 22:09:15.358928  199649 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1013 22:09:15.370340  199649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem --> /etc/ssl/certs/42992.pem (1708 bytes)
	I1013 22:09:15.395099  199649 start.go:296] duration metric: took 180.928266ms for postStartSetup
	I1013 22:09:15.395455  199649 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-007533
	I1013 22:09:15.419073  199649 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/default-k8s-diff-port-007533/config.json ...
	I1013 22:09:15.419341  199649 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 22:09:15.419397  199649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-007533
	I1013 22:09:15.456275  199649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33081 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/default-k8s-diff-port-007533/id_rsa Username:docker}
	I1013 22:09:15.569866  199649 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1013 22:09:15.580944  199649 start.go:128] duration metric: took 11.724485747s to createHost
	I1013 22:09:15.581005  199649 start.go:83] releasing machines lock for "default-k8s-diff-port-007533", held for 11.724659461s
	I1013 22:09:15.581126  199649 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-007533
	I1013 22:09:15.610485  199649 ssh_runner.go:195] Run: cat /version.json
	I1013 22:09:15.610538  199649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-007533
	I1013 22:09:15.610767  199649 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 22:09:15.610827  199649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-007533
	I1013 22:09:15.641289  199649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33081 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/default-k8s-diff-port-007533/id_rsa Username:docker}
	I1013 22:09:15.652036  199649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33081 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/default-k8s-diff-port-007533/id_rsa Username:docker}
	I1013 22:09:15.755572  199649 ssh_runner.go:195] Run: systemctl --version
	I1013 22:09:15.881521  199649 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1013 22:09:15.966142  199649 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 22:09:15.971548  199649 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 22:09:15.971691  199649 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 22:09:16.014737  199649 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1013 22:09:16.014760  199649 start.go:495] detecting cgroup driver to use...
	I1013 22:09:16.014804  199649 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1013 22:09:16.014870  199649 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1013 22:09:16.047094  199649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1013 22:09:16.065227  199649 docker.go:218] disabling cri-docker service (if available) ...
	I1013 22:09:16.065301  199649 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 22:09:16.093972  199649 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 22:09:16.133005  199649 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 22:09:16.336622  199649 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 22:09:16.573046  199649 docker.go:234] disabling docker service ...
	I1013 22:09:16.573137  199649 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 22:09:16.602629  199649 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 22:09:16.622518  199649 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 22:09:16.808556  199649 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 22:09:16.974085  199649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1013 22:09:16.989507  199649 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 22:09:17.007548  199649 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1013 22:09:17.007669  199649 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:09:17.019897  199649 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1013 22:09:17.020018  199649 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:09:17.035961  199649 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:09:17.052571  199649 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:09:17.066966  199649 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 22:09:17.077448  199649 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:09:17.091484  199649 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:09:17.108939  199649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:09:17.120668  199649 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 22:09:17.128801  199649 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
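The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf so that CRI-O uses the registry.k8s.io/pause:3.10.1 pause image, the cgroupfs cgroup manager, a pod-scoped conmon cgroup, and an unprivileged-port sysctl. A sketch of how to confirm the result from the Docker host (assuming the node container is still up; the surrounding TOML sections are not shown in the log):

    # Dump the drop-in that the sed edits above modified
    docker exec default-k8s-diff-port-007533 cat /etc/crio/crio.conf.d/02-crio.conf
    # Values to look for, per the log:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0" inside default_sysctls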
	I1013 22:09:17.137065  199649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:09:17.298582  199649 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1013 22:09:17.770547  199649 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1013 22:09:17.770631  199649 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1013 22:09:17.775155  199649 start.go:563] Will wait 60s for crictl version
	I1013 22:09:17.775241  199649 ssh_runner.go:195] Run: which crictl
	I1013 22:09:17.779462  199649 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1013 22:09:17.818255  199649 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1013 22:09:17.818358  199649 ssh_runner.go:195] Run: crio --version
	I1013 22:09:17.853475  199649 ssh_runner.go:195] Run: crio --version
	I1013 22:09:17.892899  199649 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1013 22:09:14.555117  196707 pod_ready.go:104] pod "coredns-66bc5c9577-gkbv8" is not "Ready", error: <nil>
	W1013 22:09:16.566759  196707 pod_ready.go:104] pod "coredns-66bc5c9577-gkbv8" is not "Ready", error: <nil>
	I1013 22:09:17.895814  199649 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-007533 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 22:09:17.920962  199649 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1013 22:09:17.926325  199649 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 22:09:17.939530  199649 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-007533 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-007533 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 22:09:17.939641  199649 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:09:17.939712  199649 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 22:09:17.986627  199649 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 22:09:17.986649  199649 crio.go:433] Images already preloaded, skipping extraction
	I1013 22:09:17.986709  199649 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 22:09:18.028809  199649 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 22:09:18.028838  199649 cache_images.go:85] Images are preloaded, skipping loading
	I1013 22:09:18.028847  199649 kubeadm.go:934] updating node { 192.168.76.2 8444 v1.34.1 crio true true} ...
	I1013 22:09:18.028937  199649 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-007533 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-007533 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
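The [Unit]/[Service] fragment above is the kubelet drop-in minikube renders for this node; once it is copied over (the scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf appears a few lines below), it can be inspected in place. A minimal check, assuming the node container is still running and systemd is reachable via docker exec:

    # Show the drop-in exactly as systemd sees it, merged with the base kubelet unit
    docker exec default-k8s-diff-port-007533 systemctl cat kubelet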
	I1013 22:09:18.029039  199649 ssh_runner.go:195] Run: crio config
	I1013 22:09:18.122252  199649 cni.go:84] Creating CNI manager for ""
	I1013 22:09:18.122326  199649 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 22:09:18.122363  199649 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1013 22:09:18.122417  199649 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-007533 NodeName:default-k8s-diff-port-007533 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 22:09:18.122588  199649 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-007533"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1013 22:09:18.122714  199649 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1013 22:09:18.131150  199649 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 22:09:18.131286  199649 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 22:09:18.140488  199649 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1013 22:09:18.153807  199649 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 22:09:18.167268  199649 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
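The kubeadm.yaml.new just copied to the node is the rendered configuration shown above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). As a sketch, it can be sanity-checked against the kubeadm binary minikube installed; this assumes the bundled kubeadm supports the config validate subcommand, as recent releases do:

    docker exec default-k8s-diff-port-007533 \
      /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
        --config /var/tmp/minikube/kubeadm.yaml.new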
	I1013 22:09:18.180826  199649 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1013 22:09:18.184916  199649 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 22:09:18.196385  199649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:09:18.354101  199649 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 22:09:18.378942  199649 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/default-k8s-diff-port-007533 for IP: 192.168.76.2
	I1013 22:09:18.379017  199649 certs.go:195] generating shared ca certs ...
	I1013 22:09:18.379058  199649 certs.go:227] acquiring lock for ca certs: {Name:mk2386d3847709c1fe7ff4ab092e2e3fd8551167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:09:18.379269  199649 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-2495/.minikube/ca.key
	I1013 22:09:18.379379  199649 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.key
	I1013 22:09:18.379405  199649 certs.go:257] generating profile certs ...
	I1013 22:09:18.379498  199649 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/default-k8s-diff-port-007533/client.key
	I1013 22:09:18.379550  199649 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/default-k8s-diff-port-007533/client.crt with IP's: []
	I1013 22:09:19.042459  199649 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/default-k8s-diff-port-007533/client.crt ...
	I1013 22:09:19.042543  199649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/default-k8s-diff-port-007533/client.crt: {Name:mk33cf6d21f8105402a719fcdcb5867dc8ff2024 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:09:19.042756  199649 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/default-k8s-diff-port-007533/client.key ...
	I1013 22:09:19.042801  199649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/default-k8s-diff-port-007533/client.key: {Name:mk8316bdc8d74f8e5a75398eb7d2e1bb2e8dfe2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:09:19.042951  199649 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/default-k8s-diff-port-007533/apiserver.key.e8d90e38
	I1013 22:09:19.043007  199649 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/default-k8s-diff-port-007533/apiserver.crt.e8d90e38 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1013 22:09:20.188334  199649 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/default-k8s-diff-port-007533/apiserver.crt.e8d90e38 ...
	I1013 22:09:20.188412  199649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/default-k8s-diff-port-007533/apiserver.crt.e8d90e38: {Name:mkb81c2077e2bb51c5a0173e098c8c755a5cb4fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:09:20.188644  199649 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/default-k8s-diff-port-007533/apiserver.key.e8d90e38 ...
	I1013 22:09:20.188679  199649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/default-k8s-diff-port-007533/apiserver.key.e8d90e38: {Name:mkd5ea86d3de7ff66b00960144398820cd664590 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:09:20.188844  199649 certs.go:382] copying /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/default-k8s-diff-port-007533/apiserver.crt.e8d90e38 -> /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/default-k8s-diff-port-007533/apiserver.crt
	I1013 22:09:20.188988  199649 certs.go:386] copying /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/default-k8s-diff-port-007533/apiserver.key.e8d90e38 -> /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/default-k8s-diff-port-007533/apiserver.key
	I1013 22:09:20.189113  199649 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/default-k8s-diff-port-007533/proxy-client.key
	I1013 22:09:20.189163  199649 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/default-k8s-diff-port-007533/proxy-client.crt with IP's: []
	I1013 22:09:20.478778  199649 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/default-k8s-diff-port-007533/proxy-client.crt ...
	I1013 22:09:20.478810  199649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/default-k8s-diff-port-007533/proxy-client.crt: {Name:mk8007471a17e56e105af0084e17752cf6a507e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:09:20.479003  199649 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/default-k8s-diff-port-007533/proxy-client.key ...
	I1013 22:09:20.479022  199649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/default-k8s-diff-port-007533/proxy-client.key: {Name:mkfa98cdbeff2a61ed92491341fe1447d5b86687 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:09:20.479270  199649 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/4299.pem (1338 bytes)
	W1013 22:09:20.479320  199649 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-2495/.minikube/certs/4299_empty.pem, impossibly tiny 0 bytes
	I1013 22:09:20.479335  199649 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca-key.pem (1679 bytes)
	I1013 22:09:20.479362  199649 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem (1082 bytes)
	I1013 22:09:20.479389  199649 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem (1123 bytes)
	I1013 22:09:20.479414  199649 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/key.pem (1675 bytes)
	I1013 22:09:20.479465  199649 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem (1708 bytes)
	I1013 22:09:20.480084  199649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 22:09:20.502086  199649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1013 22:09:20.526965  199649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 22:09:20.546548  199649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1013 22:09:20.567316  199649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/default-k8s-diff-port-007533/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1013 22:09:20.585629  199649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/default-k8s-diff-port-007533/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1013 22:09:20.611386  199649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/default-k8s-diff-port-007533/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 22:09:20.633444  199649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/default-k8s-diff-port-007533/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1013 22:09:20.656943  199649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem --> /usr/share/ca-certificates/42992.pem (1708 bytes)
	I1013 22:09:20.675327  199649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 22:09:20.697653  199649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/certs/4299.pem --> /usr/share/ca-certificates/4299.pem (1338 bytes)
	I1013 22:09:20.719000  199649 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 22:09:20.735174  199649 ssh_runner.go:195] Run: openssl version
	I1013 22:09:20.741994  199649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 22:09:20.750702  199649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:09:20.756308  199649 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 20:59 /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:09:20.756451  199649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:09:20.813386  199649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1013 22:09:20.824030  199649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4299.pem && ln -fs /usr/share/ca-certificates/4299.pem /etc/ssl/certs/4299.pem"
	I1013 22:09:20.838333  199649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4299.pem
	I1013 22:09:20.844242  199649 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 13 21:06 /usr/share/ca-certificates/4299.pem
	I1013 22:09:20.844368  199649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4299.pem
	I1013 22:09:20.895701  199649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4299.pem /etc/ssl/certs/51391683.0"
	I1013 22:09:20.905749  199649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/42992.pem && ln -fs /usr/share/ca-certificates/42992.pem /etc/ssl/certs/42992.pem"
	I1013 22:09:20.914275  199649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42992.pem
	I1013 22:09:20.917906  199649 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 13 21:06 /usr/share/ca-certificates/42992.pem
	I1013 22:09:20.917981  199649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42992.pem
	I1013 22:09:20.961025  199649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/42992.pem /etc/ssl/certs/3ec20f2e.0"
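At this point the CA, API-server, and proxy-client key pairs generated above have been copied into /var/lib/minikube/certs on the node and the extra CAs linked under /etc/ssl/certs. A minimal spot-check of the API-server certificate's SANs, added for illustration (it assumes the openssl build inside the node supports the -ext flag, as the Debian bookworm base does):

    docker exec default-k8s-diff-port-007533 openssl x509 -noout -ext subjectAltName \
      -in /var/lib/minikube/certs/apiserver.crt
    # expected to include 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.76.2, per the log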
	I1013 22:09:20.970904  199649 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 22:09:20.975201  199649 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1013 22:09:20.975260  199649 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-007533 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-007533 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:09:20.975336  199649 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 22:09:20.975403  199649 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 22:09:21.007240  199649 cri.go:89] found id: ""
	I1013 22:09:21.007327  199649 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1013 22:09:21.015525  199649 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1013 22:09:21.023842  199649 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1013 22:09:21.023951  199649 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1013 22:09:21.031756  199649 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1013 22:09:21.031802  199649 kubeadm.go:157] found existing configuration files:
	
	I1013 22:09:21.031872  199649 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1013 22:09:21.039648  199649 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1013 22:09:21.039713  199649 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1013 22:09:21.047356  199649 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1013 22:09:21.055848  199649 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1013 22:09:21.055931  199649 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1013 22:09:21.063426  199649 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1013 22:09:21.071198  199649 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1013 22:09:21.071281  199649 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1013 22:09:21.078499  199649 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1013 22:09:21.086083  199649 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1013 22:09:21.086166  199649 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1013 22:09:21.093634  199649 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1013 22:09:21.136701  199649 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1013 22:09:21.137018  199649 kubeadm.go:318] [preflight] Running pre-flight checks
	I1013 22:09:21.159000  199649 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1013 22:09:21.159102  199649 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1013 22:09:21.159162  199649 kubeadm.go:318] OS: Linux
	I1013 22:09:21.159240  199649 kubeadm.go:318] CGROUPS_CPU: enabled
	I1013 22:09:21.159320  199649 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1013 22:09:21.159397  199649 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1013 22:09:21.159471  199649 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1013 22:09:21.159538  199649 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1013 22:09:21.159619  199649 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1013 22:09:21.159690  199649 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1013 22:09:21.159770  199649 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1013 22:09:21.159870  199649 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1013 22:09:21.229918  199649 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1013 22:09:21.230082  199649 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1013 22:09:21.230211  199649 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1013 22:09:21.239894  199649 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1013 22:09:19.062822  196707 pod_ready.go:104] pod "coredns-66bc5c9577-gkbv8" is not "Ready", error: <nil>
	W1013 22:09:21.553722  196707 pod_ready.go:104] pod "coredns-66bc5c9577-gkbv8" is not "Ready", error: <nil>
	I1013 22:09:21.245520  199649 out.go:252]   - Generating certificates and keys ...
	I1013 22:09:21.245689  199649 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1013 22:09:21.245798  199649 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1013 22:09:21.307735  199649 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1013 22:09:21.595059  199649 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1013 22:09:22.116884  199649 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1013 22:09:22.672125  199649 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1013 22:09:22.894002  199649 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1013 22:09:22.894642  199649 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-007533 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1013 22:09:23.301426  199649 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1013 22:09:23.301749  199649 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-007533 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1013 22:09:23.798005  199649 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1013 22:09:24.494536  199649 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1013 22:09:24.750411  199649 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1013 22:09:24.750696  199649 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1013 22:09:25.111608  199649 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1013 22:09:25.797923  199649 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1013 22:09:26.030538  199649 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1013 22:09:26.432338  199649 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1013 22:09:27.284922  199649 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1013 22:09:27.286141  199649 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1013 22:09:27.290801  199649 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1013 22:09:23.554395  196707 pod_ready.go:104] pod "coredns-66bc5c9577-gkbv8" is not "Ready", error: <nil>
	W1013 22:09:26.053777  196707 pod_ready.go:104] pod "coredns-66bc5c9577-gkbv8" is not "Ready", error: <nil>
	I1013 22:09:27.294415  199649 out.go:252]   - Booting up control plane ...
	I1013 22:09:27.294522  199649 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1013 22:09:27.294610  199649 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1013 22:09:27.295443  199649 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1013 22:09:27.311582  199649 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1013 22:09:27.312090  199649 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1013 22:09:27.319974  199649 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1013 22:09:27.320569  199649 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1013 22:09:27.320652  199649 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1013 22:09:27.453725  199649 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1013 22:09:27.453850  199649 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1013 22:09:28.455486  199649 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001664549s
	I1013 22:09:28.461079  199649 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1013 22:09:28.461249  199649 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8444/livez
	I1013 22:09:28.461389  199649 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1013 22:09:28.461511  199649 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1013 22:09:28.055207  196707 pod_ready.go:104] pod "coredns-66bc5c9577-gkbv8" is not "Ready", error: <nil>
	W1013 22:09:30.552440  196707 pod_ready.go:104] pod "coredns-66bc5c9577-gkbv8" is not "Ready", error: <nil>
	W1013 22:09:32.554665  196707 pod_ready.go:104] pod "coredns-66bc5c9577-gkbv8" is not "Ready", error: <nil>
	I1013 22:09:31.980123  199649 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.518631378s
	I1013 22:09:34.147447  199649 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 5.686381192s
	I1013 22:09:35.463637  199649 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 7.002441818s
	I1013 22:09:35.491584  199649 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1013 22:09:35.505944  199649 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1013 22:09:35.523310  199649 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1013 22:09:35.523519  199649 kubeadm.go:318] [mark-control-plane] Marking the node default-k8s-diff-port-007533 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1013 22:09:35.539808  199649 kubeadm.go:318] [bootstrap-token] Using token: 7ed21a.d18grr5is41jwd6z
	I1013 22:09:35.542738  199649 out.go:252]   - Configuring RBAC rules ...
	I1013 22:09:35.542868  199649 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1013 22:09:35.552995  199649 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1013 22:09:35.562078  199649 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1013 22:09:35.568725  199649 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1013 22:09:35.572819  199649 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1013 22:09:35.577281  199649 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1013 22:09:35.871613  199649 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1013 22:09:36.316288  199649 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1013 22:09:36.873935  199649 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1013 22:09:36.875241  199649 kubeadm.go:318] 
	I1013 22:09:36.875326  199649 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1013 22:09:36.875340  199649 kubeadm.go:318] 
	I1013 22:09:36.875422  199649 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1013 22:09:36.875427  199649 kubeadm.go:318] 
	I1013 22:09:36.875511  199649 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1013 22:09:36.875579  199649 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1013 22:09:36.875636  199649 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1013 22:09:36.875646  199649 kubeadm.go:318] 
	I1013 22:09:36.875717  199649 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1013 22:09:36.875727  199649 kubeadm.go:318] 
	I1013 22:09:36.875821  199649 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1013 22:09:36.875837  199649 kubeadm.go:318] 
	I1013 22:09:36.875892  199649 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1013 22:09:36.875993  199649 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1013 22:09:36.876064  199649 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1013 22:09:36.876072  199649 kubeadm.go:318] 
	I1013 22:09:36.876156  199649 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1013 22:09:36.876236  199649 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1013 22:09:36.876244  199649 kubeadm.go:318] 
	I1013 22:09:36.876329  199649 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8444 --token 7ed21a.d18grr5is41jwd6z \
	I1013 22:09:36.876441  199649 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:8246abc30e5ad4f73e0c5665e9f98b5f472397f3707b313971a0405cce9e4e60 \
	I1013 22:09:36.876464  199649 kubeadm.go:318] 	--control-plane 
	I1013 22:09:36.876468  199649 kubeadm.go:318] 
	I1013 22:09:36.876552  199649 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1013 22:09:36.876559  199649 kubeadm.go:318] 
	I1013 22:09:36.876640  199649 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8444 --token 7ed21a.d18grr5is41jwd6z \
	I1013 22:09:36.876742  199649 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:8246abc30e5ad4f73e0c5665e9f98b5f472397f3707b313971a0405cce9e4e60 
	I1013 22:09:36.882004  199649 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1013 22:09:36.882262  199649 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1013 22:09:36.882377  199649 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1013 22:09:36.882396  199649 cni.go:84] Creating CNI manager for ""
	I1013 22:09:36.882404  199649 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 22:09:36.885448  199649 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1013 22:09:35.053369  196707 pod_ready.go:104] pod "coredns-66bc5c9577-gkbv8" is not "Ready", error: <nil>
	W1013 22:09:37.054260  196707 pod_ready.go:104] pod "coredns-66bc5c9577-gkbv8" is not "Ready", error: <nil>
	I1013 22:09:36.888385  199649 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1013 22:09:36.892948  199649 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1013 22:09:36.892972  199649 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1013 22:09:36.911274  199649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
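	The manifest applied above is the kindnet CNI that minikube selected for the docker driver with the crio runtime. A minimal sketch of how one could confirm it took effect, with the pod name pattern inferred from this log rather than from minikube's manifests:
	
	    # the kindnet pod should be Running and the node should go Ready shortly after
	    kubectl -n kube-system get pods -o wide | grep kindnet
	    kubectl get nodes
	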
	I1013 22:09:37.235707  199649 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1013 22:09:37.235892  199649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:09:37.235965  199649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-007533 minikube.k8s.io/updated_at=2025_10_13T22_09_37_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6d66ff63385795e7745a92b3d96cb54f5b977801 minikube.k8s.io/name=default-k8s-diff-port-007533 minikube.k8s.io/primary=true
	I1013 22:09:37.256420  199649 ops.go:34] apiserver oom_adj: -16
	I1013 22:09:37.452048  199649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:09:37.952622  199649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:09:38.452228  199649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:09:38.054543  196707 pod_ready.go:94] pod "coredns-66bc5c9577-gkbv8" is "Ready"
	I1013 22:09:38.054568  196707 pod_ready.go:86] duration metric: took 32.507218861s for pod "coredns-66bc5c9577-gkbv8" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:09:38.057916  196707 pod_ready.go:83] waiting for pod "etcd-embed-certs-251758" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:09:38.063530  196707 pod_ready.go:94] pod "etcd-embed-certs-251758" is "Ready"
	I1013 22:09:38.063563  196707 pod_ready.go:86] duration metric: took 5.571223ms for pod "etcd-embed-certs-251758" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:09:38.080104  196707 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-251758" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:09:38.086191  196707 pod_ready.go:94] pod "kube-apiserver-embed-certs-251758" is "Ready"
	I1013 22:09:38.086276  196707 pod_ready.go:86] duration metric: took 6.073791ms for pod "kube-apiserver-embed-certs-251758" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:09:38.089380  196707 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-251758" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:09:38.251467  196707 pod_ready.go:94] pod "kube-controller-manager-embed-certs-251758" is "Ready"
	I1013 22:09:38.251536  196707 pod_ready.go:86] duration metric: took 162.080852ms for pod "kube-controller-manager-embed-certs-251758" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:09:38.451599  196707 pod_ready.go:83] waiting for pod "kube-proxy-nmmdh" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:09:38.851419  196707 pod_ready.go:94] pod "kube-proxy-nmmdh" is "Ready"
	I1013 22:09:38.851447  196707 pod_ready.go:86] duration metric: took 399.82337ms for pod "kube-proxy-nmmdh" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:09:39.051354  196707 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-251758" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:09:39.451489  196707 pod_ready.go:94] pod "kube-scheduler-embed-certs-251758" is "Ready"
	I1013 22:09:39.451515  196707 pod_ready.go:86] duration metric: took 400.132982ms for pod "kube-scheduler-embed-certs-251758" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:09:39.451527  196707 pod_ready.go:40] duration metric: took 33.969085594s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 22:09:39.526554  196707 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1013 22:09:39.529960  196707 out.go:179] * Done! kubectl is now configured to use "embed-certs-251758" cluster and "default" namespace by default
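	With that, the embed-certs-251758 profile is up and its kubeconfig context is active. A minimal sketch of how one might double-check what this message reports, using only standard kubectl (cluster and pod names as they appear elsewhere in this log):
	
	    kubectl config current-context    # expect: embed-certs-251758
	    kubectl get nodes                 # the single control-plane node should be Ready
	    kubectl -n kube-system get pods   # matches the container status section further down
	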
	I1013 22:09:38.952847  199649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:09:39.452148  199649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:09:39.953012  199649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:09:40.452802  199649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:09:40.952719  199649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:09:41.452994  199649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:09:41.596275  199649 kubeadm.go:1113] duration metric: took 4.360428839s to wait for elevateKubeSystemPrivileges
	I1013 22:09:41.596312  199649 kubeadm.go:402] duration metric: took 20.62105565s to StartCluster
	I1013 22:09:41.596330  199649 settings.go:142] acquiring lock: {Name:mk4a4b065845724eb9b4bb1832a39a02e57dd066 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:09:41.596435  199649 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-2495/kubeconfig
	I1013 22:09:41.598040  199649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/kubeconfig: {Name:mke211bdcf461494c5205a242d94f4d44bbce10e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:09:41.598285  199649 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 22:09:41.598597  199649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1013 22:09:41.598979  199649 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1013 22:09:41.599074  199649 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-007533"
	I1013 22:09:41.599104  199649 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-007533"
	I1013 22:09:41.599146  199649 host.go:66] Checking if "default-k8s-diff-port-007533" exists ...
	I1013 22:09:41.599186  199649 config.go:182] Loaded profile config "default-k8s-diff-port-007533": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:09:41.599253  199649 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-007533"
	I1013 22:09:41.599270  199649 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-007533"
	I1013 22:09:41.599723  199649 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-007533 --format={{.State.Status}}
	I1013 22:09:41.599833  199649 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-007533 --format={{.State.Status}}
	I1013 22:09:41.602623  199649 out.go:179] * Verifying Kubernetes components...
	I1013 22:09:41.606117  199649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:09:41.643340  199649 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1013 22:09:41.645219  199649 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-007533"
	I1013 22:09:41.645257  199649 host.go:66] Checking if "default-k8s-diff-port-007533" exists ...
	I1013 22:09:41.645670  199649 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-007533 --format={{.State.Status}}
	I1013 22:09:41.646510  199649 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 22:09:41.646531  199649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1013 22:09:41.646575  199649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-007533
	I1013 22:09:41.688788  199649 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1013 22:09:41.688808  199649 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1013 22:09:41.688868  199649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-007533
	I1013 22:09:41.690483  199649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33081 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/default-k8s-diff-port-007533/id_rsa Username:docker}
	I1013 22:09:41.722569  199649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33081 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/default-k8s-diff-port-007533/id_rsa Username:docker}
	I1013 22:09:41.947244  199649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
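	The sed pipeline above rewrites the CoreDNS Corefile before replacing the coredns ConfigMap: it inserts a log directive just above the existing errors line and a hosts stanza just above the "forward . /etc/resolv.conf" line, so that host.minikube.internal resolves to the host-side gateway IP. Reconstructed from the sed expressions (indentation approximate), the patched Corefile gains:
	
	        log
	        hosts {
	           192.168.76.1 host.minikube.internal
	           fallthrough
	        }
	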
	I1013 22:09:41.947413  199649 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 22:09:42.022269  199649 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 22:09:42.116206  199649 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1013 22:09:42.589157  199649 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1013 22:09:42.590373  199649 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-007533" to be "Ready" ...
	I1013 22:09:42.842176  199649 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1013 22:09:42.845070  199649 addons.go:514] duration metric: took 1.246076078s for enable addons: enabled=[storage-provisioner default-storageclass]
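	Only the two default addons were enabled for this profile. If one wanted to confirm that from the host, a sketch using the same binary this report invokes elsewhere (profile name taken from the log above):
	
	    out/minikube-linux-arm64 -p default-k8s-diff-port-007533 addons list
	    out/minikube-linux-arm64 -p default-k8s-diff-port-007533 kubectl -- -n kube-system get pod storage-provisioner
	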
	I1013 22:09:43.094196  199649 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-007533" context rescaled to 1 replicas
	W1013 22:09:44.594978  199649 node_ready.go:57] node "default-k8s-diff-port-007533" has "Ready":"False" status (will retry)
	W1013 22:09:47.093226  199649 node_ready.go:57] node "default-k8s-diff-port-007533" has "Ready":"False" status (will retry)
	W1013 22:09:49.093448  199649 node_ready.go:57] node "default-k8s-diff-port-007533" has "Ready":"False" status (will retry)
	W1013 22:09:51.098093  199649 node_ready.go:57] node "default-k8s-diff-port-007533" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Oct 13 22:09:40 embed-certs-251758 crio[649]: time="2025-10-13T22:09:40.591285309Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=e2174865-3332-4da8-a2c2-e26978eabb85 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:09:40 embed-certs-251758 crio[649]: time="2025-10-13T22:09:40.592249214Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=e3083e23-9da0-4b6a-aa8b-0699a6d19d83 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:09:40 embed-certs-251758 crio[649]: time="2025-10-13T22:09:40.593530011Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mjklg/dashboard-metrics-scraper" id=e3228a7b-d8a5-402b-ac68-39d9768e70a5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:09:40 embed-certs-251758 crio[649]: time="2025-10-13T22:09:40.593775215Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:09:40 embed-certs-251758 crio[649]: time="2025-10-13T22:09:40.600694211Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:09:40 embed-certs-251758 crio[649]: time="2025-10-13T22:09:40.601333062Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:09:40 embed-certs-251758 crio[649]: time="2025-10-13T22:09:40.617527877Z" level=info msg="Created container c3551818d9baeddb52ce2c4239dc602b788f3242a90459a1844f4ab66a1d96ae: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mjklg/dashboard-metrics-scraper" id=e3228a7b-d8a5-402b-ac68-39d9768e70a5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:09:40 embed-certs-251758 crio[649]: time="2025-10-13T22:09:40.618700705Z" level=info msg="Starting container: c3551818d9baeddb52ce2c4239dc602b788f3242a90459a1844f4ab66a1d96ae" id=85b55705-e3ef-4789-9e16-ad8accb88a0f name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 22:09:40 embed-certs-251758 crio[649]: time="2025-10-13T22:09:40.620316138Z" level=info msg="Started container" PID=1671 containerID=c3551818d9baeddb52ce2c4239dc602b788f3242a90459a1844f4ab66a1d96ae description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mjklg/dashboard-metrics-scraper id=85b55705-e3ef-4789-9e16-ad8accb88a0f name=/runtime.v1.RuntimeService/StartContainer sandboxID=9203e0a94da1573f11ede02a2f19b39b28970786fb082293ace23d25cfa3e806
	Oct 13 22:09:40 embed-certs-251758 conmon[1669]: conmon c3551818d9baeddb52ce <ninfo>: container 1671 exited with status 1
	Oct 13 22:09:40 embed-certs-251758 crio[649]: time="2025-10-13T22:09:40.834019675Z" level=info msg="Removing container: 04c3ca1d5cffa5b5f71ffd2f7d2ba3e52a4517235c6cc59a13f0f5f938d6ae60" id=75c67bc5-66f8-490c-84ed-4ee3a6776f30 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 13 22:09:40 embed-certs-251758 crio[649]: time="2025-10-13T22:09:40.841396776Z" level=info msg="Error loading conmon cgroup of container 04c3ca1d5cffa5b5f71ffd2f7d2ba3e52a4517235c6cc59a13f0f5f938d6ae60: cgroup deleted" id=75c67bc5-66f8-490c-84ed-4ee3a6776f30 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 13 22:09:40 embed-certs-251758 crio[649]: time="2025-10-13T22:09:40.845647803Z" level=info msg="Removed container 04c3ca1d5cffa5b5f71ffd2f7d2ba3e52a4517235c6cc59a13f0f5f938d6ae60: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mjklg/dashboard-metrics-scraper" id=75c67bc5-66f8-490c-84ed-4ee3a6776f30 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 13 22:09:44 embed-certs-251758 crio[649]: time="2025-10-13T22:09:44.934021685Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 13 22:09:44 embed-certs-251758 crio[649]: time="2025-10-13T22:09:44.938394596Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 13 22:09:44 embed-certs-251758 crio[649]: time="2025-10-13T22:09:44.938427678Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 13 22:09:44 embed-certs-251758 crio[649]: time="2025-10-13T22:09:44.938454204Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 13 22:09:44 embed-certs-251758 crio[649]: time="2025-10-13T22:09:44.941507222Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 13 22:09:44 embed-certs-251758 crio[649]: time="2025-10-13T22:09:44.941539058Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 13 22:09:44 embed-certs-251758 crio[649]: time="2025-10-13T22:09:44.941560596Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 13 22:09:44 embed-certs-251758 crio[649]: time="2025-10-13T22:09:44.944558263Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 13 22:09:44 embed-certs-251758 crio[649]: time="2025-10-13T22:09:44.944590894Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 13 22:09:44 embed-certs-251758 crio[649]: time="2025-10-13T22:09:44.944610069Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 13 22:09:44 embed-certs-251758 crio[649]: time="2025-10-13T22:09:44.947515973Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 13 22:09:44 embed-certs-251758 crio[649]: time="2025-10-13T22:09:44.947552895Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	c3551818d9bae       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           15 seconds ago       Exited              dashboard-metrics-scraper   2                   9203e0a94da15       dashboard-metrics-scraper-6ffb444bf9-mjklg   kubernetes-dashboard
	6fc9ddef880b6       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           21 seconds ago       Running             storage-provisioner         2                   91e42ae8a15bc       storage-provisioner                          kube-system
	a7b4e396ad98c       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   41 seconds ago       Running             kubernetes-dashboard        0                   7efc3209ecbfb       kubernetes-dashboard-855c9754f9-txgzm        kubernetes-dashboard
	867161f640012       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           51 seconds ago       Running             coredns                     1                   e24cef4c37481       coredns-66bc5c9577-gkbv8                     kube-system
	f0db61f244882       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           51 seconds ago       Running             busybox                     1                   a924d4dd12986       busybox                                      default
	79d9d302bbf1a       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           51 seconds ago       Running             kindnet-cni                 1                   13d73902d02de       kindnet-csh4p                                kube-system
	6d3d93734554a       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           52 seconds ago       Exited              storage-provisioner         1                   91e42ae8a15bc       storage-provisioner                          kube-system
	90dfa1eb353c5       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           52 seconds ago       Running             kube-proxy                  1                   38f259a1a8743       kube-proxy-nmmdh                             kube-system
	aae750af84a55       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   c957600faac77       kube-apiserver-embed-certs-251758            kube-system
	584a98c7ea440       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   1740c9f3bca64       etcd-embed-certs-251758                      kube-system
	9c6989f62c117       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   9326b20e3faad       kube-controller-manager-embed-certs-251758   kube-system
	5f76aa65f805b       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   ba176c67a4639       kube-scheduler-embed-certs-251758            kube-system
	
	
	==> coredns [867161f640012fdfebf1dab5ae2b56691570ae088f99d8eb681cb0a4d8504d85] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37855 - 45940 "HINFO IN 4803929117046417237.4397713878805235932. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023101931s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-251758
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-251758
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6d66ff63385795e7745a92b3d96cb54f5b977801
	                    minikube.k8s.io/name=embed-certs-251758
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_13T22_07_35_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 Oct 2025 22:07:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-251758
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 Oct 2025 22:09:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 Oct 2025 22:09:44 +0000   Mon, 13 Oct 2025 22:07:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 Oct 2025 22:09:44 +0000   Mon, 13 Oct 2025 22:07:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 Oct 2025 22:09:44 +0000   Mon, 13 Oct 2025 22:07:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 13 Oct 2025 22:09:44 +0000   Mon, 13 Oct 2025 22:08:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-251758
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 204b64ea2a9c4279bdcc32d1fd6a1957
	  System UUID:                f24253cd-26e9-4717-a721-e240cb5f208d
	  Boot ID:                    d306204e-e74c-4697-9887-c9f3c96cc083
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-66bc5c9577-gkbv8                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m17s
	  kube-system                 etcd-embed-certs-251758                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m22s
	  kube-system                 kindnet-csh4p                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m18s
	  kube-system                 kube-apiserver-embed-certs-251758             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 kube-controller-manager-embed-certs-251758    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 kube-proxy-nmmdh                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 kube-scheduler-embed-certs-251758             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m16s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-mjklg    0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-txgzm         0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m15s                  kube-proxy       
	  Normal   Starting                 50s                    kube-proxy       
	  Normal   Starting                 2m30s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m30s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m30s (x8 over 2m30s)  kubelet          Node embed-certs-251758 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m30s (x8 over 2m30s)  kubelet          Node embed-certs-251758 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m30s (x7 over 2m30s)  kubelet          Node embed-certs-251758 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m23s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m23s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m22s                  kubelet          Node embed-certs-251758 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m22s                  kubelet          Node embed-certs-251758 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m22s                  kubelet          Node embed-certs-251758 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           2m19s                  node-controller  Node embed-certs-251758 event: Registered Node embed-certs-251758 in Controller
	  Normal   NodeReady                96s                    kubelet          Node embed-certs-251758 status is now: NodeReady
	  Normal   Starting                 61s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 61s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  61s (x8 over 61s)      kubelet          Node embed-certs-251758 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    61s (x8 over 61s)      kubelet          Node embed-certs-251758 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     61s (x8 over 61s)      kubelet          Node embed-certs-251758 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           49s                    node-controller  Node embed-certs-251758 event: Registered Node embed-certs-251758 in Controller
	
	
	==> dmesg <==
	[Oct13 21:40] overlayfs: idmapped layers are currently not supported
	[Oct13 21:41] overlayfs: idmapped layers are currently not supported
	[Oct13 21:42] overlayfs: idmapped layers are currently not supported
	[  +7.684868] overlayfs: idmapped layers are currently not supported
	[Oct13 21:43] overlayfs: idmapped layers are currently not supported
	[ +17.500139] overlayfs: idmapped layers are currently not supported
	[Oct13 21:44] overlayfs: idmapped layers are currently not supported
	[ +25.978359] overlayfs: idmapped layers are currently not supported
	[Oct13 21:46] overlayfs: idmapped layers are currently not supported
	[Oct13 21:47] overlayfs: idmapped layers are currently not supported
	[Oct13 21:49] overlayfs: idmapped layers are currently not supported
	[Oct13 21:50] overlayfs: idmapped layers are currently not supported
	[Oct13 21:51] overlayfs: idmapped layers are currently not supported
	[Oct13 21:53] overlayfs: idmapped layers are currently not supported
	[Oct13 21:54] overlayfs: idmapped layers are currently not supported
	[Oct13 21:55] overlayfs: idmapped layers are currently not supported
	[Oct13 22:02] overlayfs: idmapped layers are currently not supported
	[Oct13 22:04] overlayfs: idmapped layers are currently not supported
	[ +37.438407] overlayfs: idmapped layers are currently not supported
	[Oct13 22:05] overlayfs: idmapped layers are currently not supported
	[Oct13 22:06] overlayfs: idmapped layers are currently not supported
	[Oct13 22:07] overlayfs: idmapped layers are currently not supported
	[ +29.672836] overlayfs: idmapped layers are currently not supported
	[Oct13 22:08] overlayfs: idmapped layers are currently not supported
	[Oct13 22:09] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [584a98c7ea4404c695d25b77ddef1fab1aca6fa39f58483da8a818a558fb996c] <==
	{"level":"warn","ts":"2025-10-13T22:09:00.222832Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:09:00.265658Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:09:00.289465Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:09:00.315158Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:09:00.341643Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:09:00.372713Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:09:00.397456Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:09:00.436397Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:09:00.452838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:09:00.499888Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:09:00.510005Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:09:00.558486Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:09:00.588588Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:09:00.611987Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:09:00.643916Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:09:00.677831Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:09:00.692831Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:09:00.717594Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:09:00.756629Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:09:00.761191Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:09:00.802994Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:09:00.828666Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:09:00.864920Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:09:00.907518Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:09:01.119786Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43692","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 22:09:56 up  1:52,  0 user,  load average: 3.35, 2.96, 2.32
	Linux embed-certs-251758 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [79d9d302bbf1a83c8987224c8f4facc00eeb460f55b3bfa8c4bf25cd20012882] <==
	I1013 22:09:04.727896       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1013 22:09:04.728117       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1013 22:09:04.736506       1 main.go:148] setting mtu 1500 for CNI 
	I1013 22:09:04.736535       1 main.go:178] kindnetd IP family: "ipv4"
	I1013 22:09:04.736553       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-13T22:09:04Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1013 22:09:04.933679       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1013 22:09:04.933753       1 controller.go:381] "Waiting for informer caches to sync"
	I1013 22:09:04.933796       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1013 22:09:04.934506       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1013 22:09:34.934255       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1013 22:09:34.934475       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1013 22:09:34.934631       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1013 22:09:34.934693       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1013 22:09:36.533989       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1013 22:09:36.534086       1 metrics.go:72] Registering metrics
	I1013 22:09:36.534180       1 controller.go:711] "Syncing nftables rules"
	I1013 22:09:44.933731       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1013 22:09:44.933782       1 main.go:301] handling current node
	I1013 22:09:54.935885       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1013 22:09:54.935919       1 main.go:301] handling current node
	
	
	==> kube-apiserver [aae750af84a55af314225c0685c8ae60d5e9a75591e1edfaf24d63b4ef9dacec] <==
	I1013 22:09:02.899303       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1013 22:09:02.899537       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1013 22:09:02.899967       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1013 22:09:02.908000       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1013 22:09:02.908123       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1013 22:09:02.908135       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1013 22:09:02.908238       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1013 22:09:02.908280       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1013 22:09:02.916563       1 aggregator.go:171] initial CRD sync complete...
	I1013 22:09:02.922852       1 autoregister_controller.go:144] Starting autoregister controller
	I1013 22:09:02.924376       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1013 22:09:02.924420       1 cache.go:39] Caches are synced for autoregister controller
	I1013 22:09:02.924581       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1013 22:09:02.972810       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1013 22:09:03.262168       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1013 22:09:03.694238       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1013 22:09:04.675065       1 controller.go:667] quota admission added evaluator for: namespaces
	I1013 22:09:05.057737       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1013 22:09:05.163216       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1013 22:09:05.211469       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1013 22:09:05.386402       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.119.16"}
	I1013 22:09:05.411730       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.245.23"}
	I1013 22:09:07.545131       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1013 22:09:07.597083       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1013 22:09:07.821695       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [9c6989f62c1172b0c0f363d4229f6d6e18f8427d7c917aa15eacb2457bfad0a2] <==
	I1013 22:09:07.349549       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1013 22:09:07.349576       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1013 22:09:07.353464       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1013 22:09:07.354166       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1013 22:09:07.356757       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1013 22:09:07.356786       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1013 22:09:07.356802       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1013 22:09:07.357200       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1013 22:09:07.385745       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 22:09:07.385846       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1013 22:09:07.385876       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1013 22:09:07.386134       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1013 22:09:07.386932       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1013 22:09:07.391985       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 22:09:07.392002       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1013 22:09:07.401729       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1013 22:09:07.407765       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1013 22:09:07.414725       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1013 22:09:07.417085       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1013 22:09:07.421239       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1013 22:09:07.421386       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1013 22:09:07.421499       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-251758"
	I1013 22:09:07.421573       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1013 22:09:07.439896       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1013 22:09:07.440004       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	
	
	==> kube-proxy [90dfa1eb353c58a620786cbe9e0b45cd92ede40e22ea29e7b93ccc4a41008baf] <==
	I1013 22:09:04.244722       1 server_linux.go:53] "Using iptables proxy"
	I1013 22:09:05.399443       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 22:09:05.502848       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 22:09:05.502959       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1013 22:09:05.503082       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 22:09:05.529887       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1013 22:09:05.530001       1 server_linux.go:132] "Using iptables Proxier"
	I1013 22:09:05.534328       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 22:09:05.534693       1 server.go:527] "Version info" version="v1.34.1"
	I1013 22:09:05.534973       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 22:09:05.536574       1 config.go:200] "Starting service config controller"
	I1013 22:09:05.536639       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 22:09:05.536679       1 config.go:106] "Starting endpoint slice config controller"
	I1013 22:09:05.536706       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 22:09:05.536743       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 22:09:05.536771       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 22:09:05.540775       1 config.go:309] "Starting node config controller"
	I1013 22:09:05.540847       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 22:09:05.540879       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 22:09:05.637294       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1013 22:09:05.637417       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1013 22:09:05.637260       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [5f76aa65f805b69f7a41cf737a66368820d106aa35a1bd6fad89654cbc4c61aa] <==
	I1013 22:09:01.094350       1 serving.go:386] Generated self-signed cert in-memory
	I1013 22:09:04.714378       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1013 22:09:04.714412       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 22:09:04.750093       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1013 22:09:04.750202       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1013 22:09:04.750221       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1013 22:09:04.750246       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1013 22:09:04.764590       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 22:09:04.764613       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 22:09:04.764633       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1013 22:09:04.764642       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1013 22:09:04.850900       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1013 22:09:04.867321       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1013 22:09:04.867572       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 13 22:09:07 embed-certs-251758 kubelet[773]: I1013 22:09:07.931620     773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/d92af1b4-675d-48d6-b1e5-f1e88ecad032-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-txgzm\" (UID: \"d92af1b4-675d-48d6-b1e5-f1e88ecad032\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-txgzm"
	Oct 13 22:09:07 embed-certs-251758 kubelet[773]: I1013 22:09:07.932431     773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxdtb\" (UniqueName: \"kubernetes.io/projected/d92af1b4-675d-48d6-b1e5-f1e88ecad032-kube-api-access-dxdtb\") pod \"kubernetes-dashboard-855c9754f9-txgzm\" (UID: \"d92af1b4-675d-48d6-b1e5-f1e88ecad032\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-txgzm"
	Oct 13 22:09:07 embed-certs-251758 kubelet[773]: I1013 22:09:07.932472     773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/fd98834a-1072-43cc-8122-18f42b378902-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-mjklg\" (UID: \"fd98834a-1072-43cc-8122-18f42b378902\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mjklg"
	Oct 13 22:09:07 embed-certs-251758 kubelet[773]: I1013 22:09:07.932508     773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48xpk\" (UniqueName: \"kubernetes.io/projected/fd98834a-1072-43cc-8122-18f42b378902-kube-api-access-48xpk\") pod \"dashboard-metrics-scraper-6ffb444bf9-mjklg\" (UID: \"fd98834a-1072-43cc-8122-18f42b378902\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mjklg"
	Oct 13 22:09:08 embed-certs-251758 kubelet[773]: W1013 22:09:08.187925     773 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/bce2b62de8b18e7b3cd1dad60cf5b9b624701b45de81bf20dc286e3c67300396/crio-7efc3209ecbfb753f685fbb8173b50e12e5797ef32af01645e00eecc112bdc1b WatchSource:0}: Error finding container 7efc3209ecbfb753f685fbb8173b50e12e5797ef32af01645e00eecc112bdc1b: Status 404 returned error can't find the container with id 7efc3209ecbfb753f685fbb8173b50e12e5797ef32af01645e00eecc112bdc1b
	Oct 13 22:09:08 embed-certs-251758 kubelet[773]: W1013 22:09:08.212710     773 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/bce2b62de8b18e7b3cd1dad60cf5b9b624701b45de81bf20dc286e3c67300396/crio-9203e0a94da1573f11ede02a2f19b39b28970786fb082293ace23d25cfa3e806 WatchSource:0}: Error finding container 9203e0a94da1573f11ede02a2f19b39b28970786fb082293ace23d25cfa3e806: Status 404 returned error can't find the container with id 9203e0a94da1573f11ede02a2f19b39b28970786fb082293ace23d25cfa3e806
	Oct 13 22:09:14 embed-certs-251758 kubelet[773]: I1013 22:09:14.777127     773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-txgzm" podStartSLOduration=1.293617922 podStartE2EDuration="7.777110104s" podCreationTimestamp="2025-10-13 22:09:07 +0000 UTC" firstStartedPulling="2025-10-13 22:09:08.191573462 +0000 UTC m=+12.913246793" lastFinishedPulling="2025-10-13 22:09:14.675065645 +0000 UTC m=+19.396738975" observedRunningTime="2025-10-13 22:09:14.776374763 +0000 UTC m=+19.498048102" watchObservedRunningTime="2025-10-13 22:09:14.777110104 +0000 UTC m=+19.498783435"
	Oct 13 22:09:21 embed-certs-251758 kubelet[773]: I1013 22:09:21.778241     773 scope.go:117] "RemoveContainer" containerID="b047f341f317c0f7e2a55e121f7ea44caf1d0f6d3f3b5d3696b6cb8d77ea4971"
	Oct 13 22:09:22 embed-certs-251758 kubelet[773]: I1013 22:09:22.782428     773 scope.go:117] "RemoveContainer" containerID="b047f341f317c0f7e2a55e121f7ea44caf1d0f6d3f3b5d3696b6cb8d77ea4971"
	Oct 13 22:09:22 embed-certs-251758 kubelet[773]: I1013 22:09:22.783386     773 scope.go:117] "RemoveContainer" containerID="04c3ca1d5cffa5b5f71ffd2f7d2ba3e52a4517235c6cc59a13f0f5f938d6ae60"
	Oct 13 22:09:22 embed-certs-251758 kubelet[773]: E1013 22:09:22.783627     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-mjklg_kubernetes-dashboard(fd98834a-1072-43cc-8122-18f42b378902)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mjklg" podUID="fd98834a-1072-43cc-8122-18f42b378902"
	Oct 13 22:09:23 embed-certs-251758 kubelet[773]: I1013 22:09:23.787035     773 scope.go:117] "RemoveContainer" containerID="04c3ca1d5cffa5b5f71ffd2f7d2ba3e52a4517235c6cc59a13f0f5f938d6ae60"
	Oct 13 22:09:23 embed-certs-251758 kubelet[773]: E1013 22:09:23.787765     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-mjklg_kubernetes-dashboard(fd98834a-1072-43cc-8122-18f42b378902)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mjklg" podUID="fd98834a-1072-43cc-8122-18f42b378902"
	Oct 13 22:09:28 embed-certs-251758 kubelet[773]: I1013 22:09:28.138280     773 scope.go:117] "RemoveContainer" containerID="04c3ca1d5cffa5b5f71ffd2f7d2ba3e52a4517235c6cc59a13f0f5f938d6ae60"
	Oct 13 22:09:28 embed-certs-251758 kubelet[773]: E1013 22:09:28.138974     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-mjklg_kubernetes-dashboard(fd98834a-1072-43cc-8122-18f42b378902)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mjklg" podUID="fd98834a-1072-43cc-8122-18f42b378902"
	Oct 13 22:09:34 embed-certs-251758 kubelet[773]: I1013 22:09:34.813121     773 scope.go:117] "RemoveContainer" containerID="6d3d93734554a1ed7c0be2214b61d891319ff15030b44abcfa42a8cad20a6268"
	Oct 13 22:09:40 embed-certs-251758 kubelet[773]: I1013 22:09:40.590724     773 scope.go:117] "RemoveContainer" containerID="04c3ca1d5cffa5b5f71ffd2f7d2ba3e52a4517235c6cc59a13f0f5f938d6ae60"
	Oct 13 22:09:40 embed-certs-251758 kubelet[773]: I1013 22:09:40.830961     773 scope.go:117] "RemoveContainer" containerID="04c3ca1d5cffa5b5f71ffd2f7d2ba3e52a4517235c6cc59a13f0f5f938d6ae60"
	Oct 13 22:09:40 embed-certs-251758 kubelet[773]: I1013 22:09:40.831258     773 scope.go:117] "RemoveContainer" containerID="c3551818d9baeddb52ce2c4239dc602b788f3242a90459a1844f4ab66a1d96ae"
	Oct 13 22:09:40 embed-certs-251758 kubelet[773]: E1013 22:09:40.831416     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-mjklg_kubernetes-dashboard(fd98834a-1072-43cc-8122-18f42b378902)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mjklg" podUID="fd98834a-1072-43cc-8122-18f42b378902"
	Oct 13 22:09:48 embed-certs-251758 kubelet[773]: I1013 22:09:48.138051     773 scope.go:117] "RemoveContainer" containerID="c3551818d9baeddb52ce2c4239dc602b788f3242a90459a1844f4ab66a1d96ae"
	Oct 13 22:09:48 embed-certs-251758 kubelet[773]: E1013 22:09:48.138662     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-mjklg_kubernetes-dashboard(fd98834a-1072-43cc-8122-18f42b378902)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mjklg" podUID="fd98834a-1072-43cc-8122-18f42b378902"
	Oct 13 22:09:51 embed-certs-251758 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 13 22:09:51 embed-certs-251758 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 13 22:09:51 embed-certs-251758 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [a7b4e396ad98c890c97a0efa0e70d34ea49729a1e195184cb861210865588c8c] <==
	2025/10/13 22:09:14 Using namespace: kubernetes-dashboard
	2025/10/13 22:09:14 Using in-cluster config to connect to apiserver
	2025/10/13 22:09:14 Using secret token for csrf signing
	2025/10/13 22:09:14 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/13 22:09:14 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/13 22:09:14 Successful initial request to the apiserver, version: v1.34.1
	2025/10/13 22:09:14 Generating JWE encryption key
	2025/10/13 22:09:14 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/13 22:09:14 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/13 22:09:16 Initializing JWE encryption key from synchronized object
	2025/10/13 22:09:16 Creating in-cluster Sidecar client
	2025/10/13 22:09:16 Serving insecurely on HTTP port: 9090
	2025/10/13 22:09:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/13 22:09:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/13 22:09:14 Starting overwatch
	
	
	==> storage-provisioner [6d3d93734554a1ed7c0be2214b61d891319ff15030b44abcfa42a8cad20a6268] <==
	I1013 22:09:04.534658       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1013 22:09:34.537160       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [6fc9ddef880b60d90c1173448c140aac193f748165971840fb9a1cfdc4aa1d70] <==
	I1013 22:09:34.898548       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1013 22:09:34.923920       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1013 22:09:34.924048       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1013 22:09:34.926658       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:09:38.382170       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:09:42.642516       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:09:46.240634       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:09:49.294100       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:09:52.316179       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:09:52.321180       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1013 22:09:52.321317       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1013 22:09:52.321486       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-251758_f4e0248b-9f45-4ac8-a76a-d608d0fcc10a!
	I1013 22:09:52.322341       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"56f6d001-d20d-4495-ba7d-2f8ddd8e7ade", APIVersion:"v1", ResourceVersion:"677", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-251758_f4e0248b-9f45-4ac8-a76a-d608d0fcc10a became leader
	W1013 22:09:52.329965       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:09:52.353466       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1013 22:09:52.422563       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-251758_f4e0248b-9f45-4ac8-a76a-d608d0fcc10a!
	W1013 22:09:54.356447       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:09:54.362246       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:09:56.366684       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:09:56.373894       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-251758 -n embed-certs-251758
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-251758 -n embed-certs-251758: exit status 2 (355.217713ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-251758 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (6.06s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (3.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-007533 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-007533 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (291.034801ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:10:34Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
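The paused check that failed above shells out to "sudo runc list -f json" on the node before enabling the addon. A minimal way to re-run that probe by hand (a sketch only; it assumes the default-k8s-diff-port-007533 profile from this run is still up and that the node is reachable over minikube ssh):

	out/minikube-linux-arm64 -p default-k8s-diff-port-007533 ssh "sudo runc list -f json"
	out/minikube-linux-arm64 -p default-k8s-diff-port-007533 ssh "ls -ld /run/runc"

If /run/runc is absent inside the node, as the stderr above reports, runc exits with status 1 and minikube surfaces that as MK_ADDON_ENABLE_PAUSED.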
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-007533 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-007533 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-007533 describe deploy/metrics-server -n kube-system: exit status 1 (113.829175ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-007533 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-007533
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-007533:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "42b7859eebb1b617062a071a3162eafb492415d4bedb857080058ccb1421015f",
	        "Created": "2025-10-13T22:09:09.643322038Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 200359,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-13T22:09:09.741753035Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/42b7859eebb1b617062a071a3162eafb492415d4bedb857080058ccb1421015f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/42b7859eebb1b617062a071a3162eafb492415d4bedb857080058ccb1421015f/hostname",
	        "HostsPath": "/var/lib/docker/containers/42b7859eebb1b617062a071a3162eafb492415d4bedb857080058ccb1421015f/hosts",
	        "LogPath": "/var/lib/docker/containers/42b7859eebb1b617062a071a3162eafb492415d4bedb857080058ccb1421015f/42b7859eebb1b617062a071a3162eafb492415d4bedb857080058ccb1421015f-json.log",
	        "Name": "/default-k8s-diff-port-007533",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "default-k8s-diff-port-007533:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-007533",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "42b7859eebb1b617062a071a3162eafb492415d4bedb857080058ccb1421015f",
	                "LowerDir": "/var/lib/docker/overlay2/3a110be703c83a69e062725614d21230b1ee1b9bfe56d3879096cfac4be3ae94-init/diff:/var/lib/docker/overlay2/160e637be34680c755ffc6109c686a7fe5c4dfcba06c9274b3806724dc064518/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3a110be703c83a69e062725614d21230b1ee1b9bfe56d3879096cfac4be3ae94/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3a110be703c83a69e062725614d21230b1ee1b9bfe56d3879096cfac4be3ae94/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3a110be703c83a69e062725614d21230b1ee1b9bfe56d3879096cfac4be3ae94/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-007533",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-007533/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-007533",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-007533",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-007533",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3e244ddffb5f55461109f5ec48537d343a2eeb26a495b314dc511233c99dac1d",
	            "SandboxKey": "/var/run/docker/netns/3e244ddffb5f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33081"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33082"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33083"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-007533": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "62:1d:08:93:13:e5",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c207adec0a146b3ee3021b2c1eb78ecdd6cde3a3946c5c593fd373dfc1a3d79d",
	                    "EndpointID": "6b203ac4992ff6598db68ce34dc34a2cf76353efcad8355da00a105e4926e867",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-007533",
	                        "42b7859eebb1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-007533 -n default-k8s-diff-port-007533
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-007533 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-007533 logs -n 25: (1.513946816s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ image   │ old-k8s-version-061725 image list --format=json                                                                                                                                                                                               │ old-k8s-version-061725       │ jenkins │ v1.37.0 │ 13 Oct 25 22:06 UTC │ 13 Oct 25 22:06 UTC │
	│ pause   │ -p old-k8s-version-061725 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-061725       │ jenkins │ v1.37.0 │ 13 Oct 25 22:06 UTC │                     │
	│ delete  │ -p old-k8s-version-061725                                                                                                                                                                                                                     │ old-k8s-version-061725       │ jenkins │ v1.37.0 │ 13 Oct 25 22:06 UTC │ 13 Oct 25 22:07 UTC │
	│ delete  │ -p old-k8s-version-061725                                                                                                                                                                                                                     │ old-k8s-version-061725       │ jenkins │ v1.37.0 │ 13 Oct 25 22:07 UTC │ 13 Oct 25 22:07 UTC │
	│ start   │ -p embed-certs-251758 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-251758           │ jenkins │ v1.37.0 │ 13 Oct 25 22:07 UTC │ 13 Oct 25 22:08 UTC │
	│ addons  │ enable metrics-server -p no-preload-998398 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-998398            │ jenkins │ v1.37.0 │ 13 Oct 25 22:07 UTC │                     │
	│ stop    │ -p no-preload-998398 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-998398            │ jenkins │ v1.37.0 │ 13 Oct 25 22:07 UTC │ 13 Oct 25 22:07 UTC │
	│ addons  │ enable dashboard -p no-preload-998398 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-998398            │ jenkins │ v1.37.0 │ 13 Oct 25 22:07 UTC │ 13 Oct 25 22:07 UTC │
	│ start   │ -p no-preload-998398 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-998398            │ jenkins │ v1.37.0 │ 13 Oct 25 22:07 UTC │ 13 Oct 25 22:08 UTC │
	│ addons  │ enable metrics-server -p embed-certs-251758 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-251758           │ jenkins │ v1.37.0 │ 13 Oct 25 22:08 UTC │                     │
	│ stop    │ -p embed-certs-251758 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-251758           │ jenkins │ v1.37.0 │ 13 Oct 25 22:08 UTC │ 13 Oct 25 22:08 UTC │
	│ addons  │ enable dashboard -p embed-certs-251758 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-251758           │ jenkins │ v1.37.0 │ 13 Oct 25 22:08 UTC │ 13 Oct 25 22:08 UTC │
	│ start   │ -p embed-certs-251758 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-251758           │ jenkins │ v1.37.0 │ 13 Oct 25 22:08 UTC │ 13 Oct 25 22:09 UTC │
	│ image   │ no-preload-998398 image list --format=json                                                                                                                                                                                                    │ no-preload-998398            │ jenkins │ v1.37.0 │ 13 Oct 25 22:08 UTC │ 13 Oct 25 22:08 UTC │
	│ pause   │ -p no-preload-998398 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-998398            │ jenkins │ v1.37.0 │ 13 Oct 25 22:08 UTC │                     │
	│ delete  │ -p no-preload-998398                                                                                                                                                                                                                          │ no-preload-998398            │ jenkins │ v1.37.0 │ 13 Oct 25 22:08 UTC │ 13 Oct 25 22:09 UTC │
	│ delete  │ -p no-preload-998398                                                                                                                                                                                                                          │ no-preload-998398            │ jenkins │ v1.37.0 │ 13 Oct 25 22:09 UTC │ 13 Oct 25 22:09 UTC │
	│ delete  │ -p disable-driver-mounts-691681                                                                                                                                                                                                               │ disable-driver-mounts-691681 │ jenkins │ v1.37.0 │ 13 Oct 25 22:09 UTC │ 13 Oct 25 22:09 UTC │
	│ start   │ -p default-k8s-diff-port-007533 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-007533 │ jenkins │ v1.37.0 │ 13 Oct 25 22:09 UTC │ 13 Oct 25 22:10 UTC │
	│ image   │ embed-certs-251758 image list --format=json                                                                                                                                                                                                   │ embed-certs-251758           │ jenkins │ v1.37.0 │ 13 Oct 25 22:09 UTC │ 13 Oct 25 22:09 UTC │
	│ pause   │ -p embed-certs-251758 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-251758           │ jenkins │ v1.37.0 │ 13 Oct 25 22:09 UTC │                     │
	│ delete  │ -p embed-certs-251758                                                                                                                                                                                                                         │ embed-certs-251758           │ jenkins │ v1.37.0 │ 13 Oct 25 22:09 UTC │ 13 Oct 25 22:09 UTC │
	│ delete  │ -p embed-certs-251758                                                                                                                                                                                                                         │ embed-certs-251758           │ jenkins │ v1.37.0 │ 13 Oct 25 22:10 UTC │ 13 Oct 25 22:10 UTC │
	│ start   │ -p newest-cni-400889 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-400889            │ jenkins │ v1.37.0 │ 13 Oct 25 22:10 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-007533 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-007533 │ jenkins │ v1.37.0 │ 13 Oct 25 22:10 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 22:10:00
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 22:10:00.478169  204091 out.go:360] Setting OutFile to fd 1 ...
	I1013 22:10:00.478494  204091 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:10:00.478509  204091 out.go:374] Setting ErrFile to fd 2...
	I1013 22:10:00.478515  204091 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:10:00.478906  204091 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-2495/.minikube/bin
	I1013 22:10:00.479722  204091 out.go:368] Setting JSON to false
	I1013 22:10:00.483577  204091 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":6735,"bootTime":1760386666,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1013 22:10:00.484073  204091 start.go:141] virtualization:  
	I1013 22:10:00.488639  204091 out.go:179] * [newest-cni-400889] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1013 22:10:00.493601  204091 notify.go:220] Checking for updates...
	I1013 22:10:00.493608  204091 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 22:10:00.497329  204091 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 22:10:00.500918  204091 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-2495/kubeconfig
	I1013 22:10:00.504470  204091 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-2495/.minikube
	I1013 22:10:00.507944  204091 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1013 22:10:00.511642  204091 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 22:10:00.517656  204091 config.go:182] Loaded profile config "default-k8s-diff-port-007533": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:10:00.517934  204091 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 22:10:00.550819  204091 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1013 22:10:00.551000  204091 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 22:10:00.619311  204091 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-13 22:10:00.609250765 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 22:10:00.619444  204091 docker.go:318] overlay module found
	I1013 22:10:00.622730  204091 out.go:179] * Using the docker driver based on user configuration
	I1013 22:10:00.625674  204091 start.go:305] selected driver: docker
	I1013 22:10:00.625699  204091 start.go:925] validating driver "docker" against <nil>
	I1013 22:10:00.625715  204091 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 22:10:00.626519  204091 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 22:10:00.686470  204091 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-13 22:10:00.676519304 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 22:10:00.686636  204091 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1013 22:10:00.686676  204091 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1013 22:10:00.686915  204091 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1013 22:10:00.690197  204091 out.go:179] * Using Docker driver with root privileges
	I1013 22:10:00.693035  204091 cni.go:84] Creating CNI manager for ""
	I1013 22:10:00.693113  204091 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 22:10:00.693126  204091 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1013 22:10:00.693216  204091 start.go:349] cluster config:
	{Name:newest-cni-400889 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-400889 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:10:00.698480  204091 out.go:179] * Starting "newest-cni-400889" primary control-plane node in "newest-cni-400889" cluster
	I1013 22:10:00.701301  204091 cache.go:123] Beginning downloading kic base image for docker with crio
	I1013 22:10:00.704432  204091 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1013 22:10:00.707326  204091 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:10:00.707392  204091 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-2495/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1013 22:10:00.707407  204091 cache.go:58] Caching tarball of preloaded images
	I1013 22:10:00.707427  204091 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1013 22:10:00.707503  204091 preload.go:233] Found /home/jenkins/minikube-integration/21724-2495/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1013 22:10:00.707515  204091 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1013 22:10:00.707638  204091 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/newest-cni-400889/config.json ...
	I1013 22:10:00.707668  204091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/newest-cni-400889/config.json: {Name:mk3eb9ca30ea512cd35124a1e85c1aa47db49843 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:10:00.729244  204091 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1013 22:10:00.729278  204091 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1013 22:10:00.729309  204091 cache.go:232] Successfully downloaded all kic artifacts
	I1013 22:10:00.729344  204091 start.go:360] acquireMachinesLock for newest-cni-400889: {Name:mk77b7fd736221bee1ff61b7d071134f1c9c511b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 22:10:00.729471  204091 start.go:364] duration metric: took 100.666µs to acquireMachinesLock for "newest-cni-400889"
	I1013 22:10:00.729505  204091 start.go:93] Provisioning new machine with config: &{Name:newest-cni-400889 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-400889 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 22:10:00.729578  204091 start.go:125] createHost starting for "" (driver="docker")
	W1013 22:09:58.593856  199649 node_ready.go:57] node "default-k8s-diff-port-007533" has "Ready":"False" status (will retry)
	W1013 22:10:00.594587  199649 node_ready.go:57] node "default-k8s-diff-port-007533" has "Ready":"False" status (will retry)
	W1013 22:10:03.094421  199649 node_ready.go:57] node "default-k8s-diff-port-007533" has "Ready":"False" status (will retry)
	I1013 22:10:00.733173  204091 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1013 22:10:00.733432  204091 start.go:159] libmachine.API.Create for "newest-cni-400889" (driver="docker")
	I1013 22:10:00.733487  204091 client.go:168] LocalClient.Create starting
	I1013 22:10:00.733585  204091 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem
	I1013 22:10:00.733627  204091 main.go:141] libmachine: Decoding PEM data...
	I1013 22:10:00.733640  204091 main.go:141] libmachine: Parsing certificate...
	I1013 22:10:00.733702  204091 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem
	I1013 22:10:00.733721  204091 main.go:141] libmachine: Decoding PEM data...
	I1013 22:10:00.733731  204091 main.go:141] libmachine: Parsing certificate...
	I1013 22:10:00.734134  204091 cli_runner.go:164] Run: docker network inspect newest-cni-400889 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1013 22:10:00.751316  204091 cli_runner.go:211] docker network inspect newest-cni-400889 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1013 22:10:00.751399  204091 network_create.go:284] running [docker network inspect newest-cni-400889] to gather additional debugging logs...
	I1013 22:10:00.751421  204091 cli_runner.go:164] Run: docker network inspect newest-cni-400889
	W1013 22:10:00.768470  204091 cli_runner.go:211] docker network inspect newest-cni-400889 returned with exit code 1
	I1013 22:10:00.768504  204091 network_create.go:287] error running [docker network inspect newest-cni-400889]: docker network inspect newest-cni-400889: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-400889 not found
	I1013 22:10:00.768518  204091 network_create.go:289] output of [docker network inspect newest-cni-400889]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-400889 not found
	
	** /stderr **
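The non-zero exit above is expected: minikube probes for an existing network and treats "not found" as the signal to create one. A minimal reproduction of that probe, assuming any Docker host and a network name that does not exist:

    docker network inspect no-such-network >/dev/null 2>&1; echo $?
    # prints 1; without the redirect, stderr shows "network no-such-network not found"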
	I1013 22:10:00.768624  204091 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 22:10:00.786382  204091 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-95647f6063f5 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:2e:3d:b3:ce:26:60} reservation:<nil>}
	I1013 22:10:00.786735  204091 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-524c3512c6b6 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:06:88:a1:02:e0:8e} reservation:<nil>}
	I1013 22:10:00.787146  204091 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-2d17b8b5c002 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:aa:ca:29:7e:1f:a0} reservation:<nil>}
	I1013 22:10:00.787464  204091 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-c207adec0a14 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:3a:30:41:df:49:ee} reservation:<nil>}
	I1013 22:10:00.788111  204091 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019ef050}
	I1013 22:10:00.788139  204091 network_create.go:124] attempt to create docker network newest-cni-400889 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1013 22:10:00.788221  204091 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-400889 newest-cni-400889
	I1013 22:10:00.852515  204091 network_create.go:108] docker network newest-cni-400889 192.168.85.0/24 created
	I1013 22:10:00.852546  204091 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-400889" container
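A quick way to confirm the subnet and gateway of the network that was just created (values here match this run; substitute your own profile name):

    docker network inspect newest-cni-400889 \
      --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'
    # 192.168.85.0/24 192.168.85.1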
	I1013 22:10:00.852628  204091 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1013 22:10:00.869480  204091 cli_runner.go:164] Run: docker volume create newest-cni-400889 --label name.minikube.sigs.k8s.io=newest-cni-400889 --label created_by.minikube.sigs.k8s.io=true
	I1013 22:10:00.890249  204091 oci.go:103] Successfully created a docker volume newest-cni-400889
	I1013 22:10:00.890352  204091 cli_runner.go:164] Run: docker run --rm --name newest-cni-400889-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-400889 --entrypoint /usr/bin/test -v newest-cni-400889:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1013 22:10:01.456519  204091 oci.go:107] Successfully prepared a docker volume newest-cni-400889
	I1013 22:10:01.456592  204091 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:10:01.456655  204091 kic.go:194] Starting extracting preloaded images to volume ...
	I1013 22:10:01.456745  204091 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-2495/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-400889:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	W1013 22:10:05.095953  199649 node_ready.go:57] node "default-k8s-diff-port-007533" has "Ready":"False" status (will retry)
	W1013 22:10:07.594208  199649 node_ready.go:57] node "default-k8s-diff-port-007533" has "Ready":"False" status (will retry)
	I1013 22:10:05.916860  204091 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-2495/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-400889:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.460063699s)
	I1013 22:10:05.916901  204091 kic.go:203] duration metric: took 4.460243296s to extract preloaded images to volume ...
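To see what the preload left behind, the same volume can be mounted read-only from a throwaway container. The path below is an assumption based on the CRI-O overlay preload name, and the image only needs a working ls:

    docker run --rm -v newest-cni-400889:/var:ro busybox ls /var/lib/containers/storage
    # lists the pre-seeded containers/storage tree; exact contents vary by preload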
	W1013 22:10:05.917038  204091 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1013 22:10:05.917156  204091 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1013 22:10:05.977342  204091 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-400889 --name newest-cni-400889 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-400889 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-400889 --network newest-cni-400889 --ip 192.168.85.2 --volume newest-cni-400889:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1013 22:10:06.285949  204091 cli_runner.go:164] Run: docker container inspect newest-cni-400889 --format={{.State.Running}}
	I1013 22:10:06.309737  204091 cli_runner.go:164] Run: docker container inspect newest-cni-400889 --format={{.State.Status}}
	I1013 22:10:06.335916  204091 cli_runner.go:164] Run: docker exec newest-cni-400889 stat /var/lib/dpkg/alternatives/iptables
	I1013 22:10:06.406554  204091 oci.go:144] the created container "newest-cni-400889" has a running status.
	I1013 22:10:06.406598  204091 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21724-2495/.minikube/machines/newest-cni-400889/id_rsa...
	I1013 22:10:06.716496  204091 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21724-2495/.minikube/machines/newest-cni-400889/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1013 22:10:06.749257  204091 cli_runner.go:164] Run: docker container inspect newest-cni-400889 --format={{.State.Status}}
	I1013 22:10:06.775300  204091 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1013 22:10:06.775332  204091 kic_runner.go:114] Args: [docker exec --privileged newest-cni-400889 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1013 22:10:06.861825  204091 cli_runner.go:164] Run: docker container inspect newest-cni-400889 --format={{.State.Status}}
	I1013 22:10:06.887503  204091 machine.go:93] provisionDockerMachine start ...
	I1013 22:10:06.887593  204091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-400889
	I1013 22:10:06.916071  204091 main.go:141] libmachine: Using SSH client type: native
	I1013 22:10:06.916496  204091 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33086 <nil> <nil>}
	I1013 22:10:06.916530  204091 main.go:141] libmachine: About to run SSH command:
	hostname
	I1013 22:10:06.917268  204091 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1013 22:10:10.075995  204091 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-400889
	
	I1013 22:10:10.076023  204091 ubuntu.go:182] provisioning hostname "newest-cni-400889"
	I1013 22:10:10.076089  204091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-400889
	I1013 22:10:10.095298  204091 main.go:141] libmachine: Using SSH client type: native
	I1013 22:10:10.095630  204091 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33086 <nil> <nil>}
	I1013 22:10:10.095648  204091 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-400889 && echo "newest-cni-400889" | sudo tee /etc/hostname
	I1013 22:10:10.255418  204091 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-400889
	
	I1013 22:10:10.255539  204091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-400889
	I1013 22:10:10.273796  204091 main.go:141] libmachine: Using SSH client type: native
	I1013 22:10:10.274111  204091 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33086 <nil> <nil>}
	I1013 22:10:10.274134  204091 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-400889' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-400889/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-400889' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 22:10:10.420331  204091 main.go:141] libmachine: SSH cmd err, output: <nil>: 
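The hostname script above only rewrites an existing 127.0.1.1 entry (or appends one if none matches); the effect of its sed branch on a sample line:

    echo '127.0.1.1 old-hostname' | sed 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-400889/g'
    # 127.0.1.1 newest-cni-400889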
	I1013 22:10:10.420356  204091 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-2495/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-2495/.minikube}
	I1013 22:10:10.420375  204091 ubuntu.go:190] setting up certificates
	I1013 22:10:10.420427  204091 provision.go:84] configureAuth start
	I1013 22:10:10.420503  204091 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-400889
	I1013 22:10:10.440228  204091 provision.go:143] copyHostCerts
	I1013 22:10:10.440311  204091 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-2495/.minikube/key.pem, removing ...
	I1013 22:10:10.440331  204091 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-2495/.minikube/key.pem
	I1013 22:10:10.440495  204091 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-2495/.minikube/key.pem (1675 bytes)
	I1013 22:10:10.440670  204091 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-2495/.minikube/ca.pem, removing ...
	I1013 22:10:10.440688  204091 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-2495/.minikube/ca.pem
	I1013 22:10:10.440727  204091 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-2495/.minikube/ca.pem (1082 bytes)
	I1013 22:10:10.440807  204091 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-2495/.minikube/cert.pem, removing ...
	I1013 22:10:10.440822  204091 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-2495/.minikube/cert.pem
	I1013 22:10:10.440858  204091 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-2495/.minikube/cert.pem (1123 bytes)
	I1013 22:10:10.440920  204091 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-2495/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca-key.pem org=jenkins.newest-cni-400889 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-400889]
	I1013 22:10:11.061433  204091 provision.go:177] copyRemoteCerts
	I1013 22:10:11.061500  204091 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 22:10:11.061547  204091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-400889
	I1013 22:10:11.080534  204091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33086 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/newest-cni-400889/id_rsa Username:docker}
	I1013 22:10:11.187588  204091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1013 22:10:11.207134  204091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1013 22:10:11.225788  204091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1013 22:10:11.245433  204091 provision.go:87] duration metric: took 824.975589ms to configureAuth
	I1013 22:10:11.245458  204091 ubuntu.go:206] setting minikube options for container-runtime
	I1013 22:10:11.245657  204091 config.go:182] Loaded profile config "newest-cni-400889": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:10:11.245762  204091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-400889
	I1013 22:10:11.263193  204091 main.go:141] libmachine: Using SSH client type: native
	I1013 22:10:11.263502  204091 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33086 <nil> <nil>}
	I1013 22:10:11.263516  204091 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1013 22:10:11.524286  204091 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1013 22:10:11.524313  204091 machine.go:96] duration metric: took 4.636789761s to provisionDockerMachine
	I1013 22:10:11.524323  204091 client.go:171] duration metric: took 10.790826534s to LocalClient.Create
	I1013 22:10:11.524336  204091 start.go:167] duration metric: took 10.790906262s to libmachine.API.Create "newest-cni-400889"
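As a spot check (not part of the test itself), the container-runtime option file written a few lines earlier can be read back by exec'ing into the freshly created node container:

    docker exec newest-cni-400889 cat /etc/sysconfig/crio.minikube
    # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '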
	I1013 22:10:11.524344  204091 start.go:293] postStartSetup for "newest-cni-400889" (driver="docker")
	I1013 22:10:11.524353  204091 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 22:10:11.524425  204091 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 22:10:11.524487  204091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-400889
	I1013 22:10:11.545539  204091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33086 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/newest-cni-400889/id_rsa Username:docker}
	I1013 22:10:11.647997  204091 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 22:10:11.651399  204091 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1013 22:10:11.651431  204091 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1013 22:10:11.651442  204091 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-2495/.minikube/addons for local assets ...
	I1013 22:10:11.651497  204091 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-2495/.minikube/files for local assets ...
	I1013 22:10:11.651596  204091 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem -> 42992.pem in /etc/ssl/certs
	I1013 22:10:11.651708  204091 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1013 22:10:11.659508  204091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem --> /etc/ssl/certs/42992.pem (1708 bytes)
	I1013 22:10:11.677502  204091 start.go:296] duration metric: took 153.144333ms for postStartSetup
	I1013 22:10:11.677899  204091 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-400889
	I1013 22:10:11.694488  204091 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/newest-cni-400889/config.json ...
	I1013 22:10:11.694795  204091 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 22:10:11.694844  204091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-400889
	I1013 22:10:11.712165  204091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33086 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/newest-cni-400889/id_rsa Username:docker}
	I1013 22:10:11.816707  204091 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1013 22:10:11.821621  204091 start.go:128] duration metric: took 11.092028421s to createHost
	I1013 22:10:11.821642  204091 start.go:83] releasing machines lock for "newest-cni-400889", held for 11.092156007s
	I1013 22:10:11.821707  204091 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-400889
	I1013 22:10:11.838654  204091 ssh_runner.go:195] Run: cat /version.json
	I1013 22:10:11.838721  204091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-400889
	I1013 22:10:11.838971  204091 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 22:10:11.839035  204091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-400889
	I1013 22:10:11.871593  204091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33086 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/newest-cni-400889/id_rsa Username:docker}
	I1013 22:10:11.881371  204091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33086 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/newest-cni-400889/id_rsa Username:docker}
	I1013 22:10:11.975493  204091 ssh_runner.go:195] Run: systemctl --version
	I1013 22:10:12.073510  204091 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1013 22:10:12.124864  204091 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 22:10:12.129297  204091 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 22:10:12.129364  204091 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 22:10:12.165974  204091 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1013 22:10:12.165994  204091 start.go:495] detecting cgroup driver to use...
	I1013 22:10:12.166026  204091 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1013 22:10:12.166100  204091 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1013 22:10:12.185037  204091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1013 22:10:12.199961  204091 docker.go:218] disabling cri-docker service (if available) ...
	I1013 22:10:12.200079  204091 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 22:10:12.218559  204091 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 22:10:12.237516  204091 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 22:10:12.364415  204091 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 22:10:12.482792  204091 docker.go:234] disabling docker service ...
	I1013 22:10:12.482886  204091 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 22:10:12.504826  204091 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 22:10:12.520475  204091 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 22:10:12.660393  204091 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 22:10:12.784225  204091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1013 22:10:12.797294  204091 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 22:10:12.813993  204091 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1013 22:10:12.814105  204091 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:10:12.823579  204091 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1013 22:10:12.823697  204091 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:10:12.832685  204091 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:10:12.841870  204091 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:10:12.854931  204091 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 22:10:12.864132  204091 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:10:12.873908  204091 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:10:12.887412  204091 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:10:12.896891  204091 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 22:10:12.904587  204091 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1013 22:10:12.911977  204091 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:10:13.032106  204091 ssh_runner.go:195] Run: sudo systemctl restart crio
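Taken together, the sed edits above aim to leave the CRI-O drop-in looking roughly like the sketch below. Only the edited keys are taken from the commands themselves; the section headers are an assumption, since the stock 02-crio.conf is not shown in the log:

    cat /etc/crio/crio.conf.d/02-crio.conf
    # [crio.image]
    # pause_image = "registry.k8s.io/pause:3.10.1"
    # [crio.runtime]
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
    # default_sysctls = [
    #   "net.ipv4.ip_unprivileged_port_start=0",
    # ]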
	I1013 22:10:13.171099  204091 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1013 22:10:13.171172  204091 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1013 22:10:13.175201  204091 start.go:563] Will wait 60s for crictl version
	I1013 22:10:13.175265  204091 ssh_runner.go:195] Run: which crictl
	I1013 22:10:13.178924  204091 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1013 22:10:13.203885  204091 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1013 22:10:13.203969  204091 ssh_runner.go:195] Run: crio --version
	I1013 22:10:13.232233  204091 ssh_runner.go:195] Run: crio --version
	I1013 22:10:13.266856  204091 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1013 22:10:13.269668  204091 cli_runner.go:164] Run: docker network inspect newest-cni-400889 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 22:10:13.285346  204091 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1013 22:10:13.289019  204091 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 22:10:13.301034  204091 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1013 22:10:09.594370  199649 node_ready.go:57] node "default-k8s-diff-port-007533" has "Ready":"False" status (will retry)
	W1013 22:10:12.093214  199649 node_ready.go:57] node "default-k8s-diff-port-007533" has "Ready":"False" status (will retry)
	I1013 22:10:13.303883  204091 kubeadm.go:883] updating cluster {Name:newest-cni-400889 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-400889 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disab
leMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 22:10:13.304027  204091 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:10:13.304107  204091 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 22:10:13.344245  204091 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 22:10:13.344270  204091 crio.go:433] Images already preloaded, skipping extraction
	I1013 22:10:13.344331  204091 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 22:10:13.369734  204091 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 22:10:13.369759  204091 cache_images.go:85] Images are preloaded, skipping loading
	I1013 22:10:13.369767  204091 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1013 22:10:13.369897  204091 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-400889 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-400889 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1013 22:10:13.369994  204091 ssh_runner.go:195] Run: crio config
	I1013 22:10:13.429277  204091 cni.go:84] Creating CNI manager for ""
	I1013 22:10:13.429303  204091 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 22:10:13.429317  204091 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1013 22:10:13.429341  204091 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-400889 NodeName:newest-cni-400889 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 22:10:13.429470  204091 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-400889"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1013 22:10:13.429543  204091 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1013 22:10:13.437764  204091 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 22:10:13.437886  204091 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 22:10:13.446359  204091 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1013 22:10:13.459385  204091 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 22:10:13.473876  204091 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
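The rendered kubeadm.yaml.new can be sanity-checked on the node before it is promoted; this assumes the staged kubeadm binary supports the config validate subcommand (present in recent releases):

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new
    # exits 0 when the config parses cleanly; otherwise prints the offending fields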
	I1013 22:10:13.487277  204091 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1013 22:10:13.490757  204091 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 22:10:13.500625  204091 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:10:13.617143  204091 ssh_runner.go:195] Run: sudo systemctl start kubelet
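At this point the kubelet unit and its minikube drop-in are both in place; the merged unit, including the ExecStart override logged above, can be viewed inside the node container:

    docker exec newest-cni-400889 systemctl cat kubelet
    # concatenates /lib/systemd/system/kubelet.service and
    # /etc/systemd/system/kubelet.service.d/10-kubeadm.conf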
	I1013 22:10:13.633860  204091 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/newest-cni-400889 for IP: 192.168.85.2
	I1013 22:10:13.633895  204091 certs.go:195] generating shared ca certs ...
	I1013 22:10:13.633911  204091 certs.go:227] acquiring lock for ca certs: {Name:mk2386d3847709c1fe7ff4ab092e2e3fd8551167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:10:13.634078  204091 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-2495/.minikube/ca.key
	I1013 22:10:13.634143  204091 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.key
	I1013 22:10:13.634156  204091 certs.go:257] generating profile certs ...
	I1013 22:10:13.634231  204091 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/newest-cni-400889/client.key
	I1013 22:10:13.635081  204091 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/newest-cni-400889/client.crt with IP's: []
	I1013 22:10:14.490604  204091 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/newest-cni-400889/client.crt ...
	I1013 22:10:14.490643  204091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/newest-cni-400889/client.crt: {Name:mkf6565e43552edf412ea6ea3109c96d0aa4ca13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:10:14.490833  204091 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/newest-cni-400889/client.key ...
	I1013 22:10:14.490850  204091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/newest-cni-400889/client.key: {Name:mk2153306a80c6b7a2366c6828c9b73ef42b023c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:10:14.490928  204091 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/newest-cni-400889/apiserver.key.58b80bf4
	I1013 22:10:14.490949  204091 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/newest-cni-400889/apiserver.crt.58b80bf4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1013 22:10:14.801968  204091 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/newest-cni-400889/apiserver.crt.58b80bf4 ...
	I1013 22:10:14.801997  204091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/newest-cni-400889/apiserver.crt.58b80bf4: {Name:mka1d1df894ff9c02f408a5e94952ede7bd4010b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:10:14.802178  204091 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/newest-cni-400889/apiserver.key.58b80bf4 ...
	I1013 22:10:14.802194  204091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/newest-cni-400889/apiserver.key.58b80bf4: {Name:mk7b0b54a26c406e8fcba65457db9ade23c57492 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:10:14.802288  204091 certs.go:382] copying /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/newest-cni-400889/apiserver.crt.58b80bf4 -> /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/newest-cni-400889/apiserver.crt
	I1013 22:10:14.802369  204091 certs.go:386] copying /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/newest-cni-400889/apiserver.key.58b80bf4 -> /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/newest-cni-400889/apiserver.key
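The SANs baked into the apiserver certificate can be checked against the IP list logged when it was generated (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.85.2):

    openssl x509 -noout -ext subjectAltName \
      -in /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/newest-cni-400889/apiserver.crt
    # X509v3 Subject Alternative Name:
    #     DNS:..., IP Address:10.96.0.1, IP Address:127.0.0.1, IP Address:10.0.0.1, IP Address:192.168.85.2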
	I1013 22:10:14.802431  204091 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/newest-cni-400889/proxy-client.key
	I1013 22:10:14.802452  204091 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/newest-cni-400889/proxy-client.crt with IP's: []
	I1013 22:10:15.206667  204091 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/newest-cni-400889/proxy-client.crt ...
	I1013 22:10:15.206695  204091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/newest-cni-400889/proxy-client.crt: {Name:mk8acecebd4a120699d4dda0449e92296a18736b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:10:15.206895  204091 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/newest-cni-400889/proxy-client.key ...
	I1013 22:10:15.206913  204091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/newest-cni-400889/proxy-client.key: {Name:mke2d5fb98c5f0ae744d22d1e1540598f99bd4f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:10:15.207125  204091 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/4299.pem (1338 bytes)
	W1013 22:10:15.207172  204091 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-2495/.minikube/certs/4299_empty.pem, impossibly tiny 0 bytes
	I1013 22:10:15.207187  204091 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca-key.pem (1679 bytes)
	I1013 22:10:15.207211  204091 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem (1082 bytes)
	I1013 22:10:15.207238  204091 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem (1123 bytes)
	I1013 22:10:15.207264  204091 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/key.pem (1675 bytes)
	I1013 22:10:15.207313  204091 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem (1708 bytes)
	I1013 22:10:15.207891  204091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 22:10:15.226226  204091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1013 22:10:15.244260  204091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 22:10:15.261702  204091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1013 22:10:15.279294  204091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/newest-cni-400889/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1013 22:10:15.298584  204091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/newest-cni-400889/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1013 22:10:15.317658  204091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/newest-cni-400889/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 22:10:15.340079  204091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/newest-cni-400889/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1013 22:10:15.358089  204091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem --> /usr/share/ca-certificates/42992.pem (1708 bytes)
	I1013 22:10:15.376971  204091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 22:10:15.395972  204091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/certs/4299.pem --> /usr/share/ca-certificates/4299.pem (1338 bytes)
	I1013 22:10:15.414043  204091 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 22:10:15.427013  204091 ssh_runner.go:195] Run: openssl version
	I1013 22:10:15.433312  204091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/42992.pem && ln -fs /usr/share/ca-certificates/42992.pem /etc/ssl/certs/42992.pem"
	I1013 22:10:15.442079  204091 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42992.pem
	I1013 22:10:15.445841  204091 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 13 21:06 /usr/share/ca-certificates/42992.pem
	I1013 22:10:15.445949  204091 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42992.pem
	I1013 22:10:15.488637  204091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/42992.pem /etc/ssl/certs/3ec20f2e.0"
	I1013 22:10:15.497110  204091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 22:10:15.505620  204091 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:10:15.509572  204091 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 20:59 /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:10:15.509678  204091 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:10:15.551108  204091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1013 22:10:15.559681  204091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4299.pem && ln -fs /usr/share/ca-certificates/4299.pem /etc/ssl/certs/4299.pem"
	I1013 22:10:15.568379  204091 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4299.pem
	I1013 22:10:15.572142  204091 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 13 21:06 /usr/share/ca-certificates/4299.pem
	I1013 22:10:15.572207  204091 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4299.pem
	I1013 22:10:15.613942  204091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4299.pem /etc/ssl/certs/51391683.0"
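The *.0 symlink names used above follow OpenSSL's subject-hash convention; the hash in each link name comes straight from the command the log already runs against the corresponding PEM file:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # b5213941   -> hence the link /etc/ssl/certs/b5213941.0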
	I1013 22:10:15.622324  204091 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 22:10:15.626056  204091 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1013 22:10:15.626124  204091 kubeadm.go:400] StartCluster: {Name:newest-cni-400889 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-400889 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableM
etrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:10:15.626207  204091 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 22:10:15.626277  204091 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 22:10:15.658266  204091 cri.go:89] found id: ""
	I1013 22:10:15.658334  204091 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1013 22:10:15.666092  204091 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1013 22:10:15.673870  204091 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1013 22:10:15.673934  204091 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1013 22:10:15.681973  204091 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1013 22:10:15.681995  204091 kubeadm.go:157] found existing configuration files:
	
	I1013 22:10:15.682055  204091 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1013 22:10:15.690198  204091 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1013 22:10:15.690277  204091 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1013 22:10:15.697853  204091 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1013 22:10:15.706252  204091 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1013 22:10:15.706315  204091 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1013 22:10:15.713607  204091 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1013 22:10:15.721545  204091 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1013 22:10:15.721610  204091 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1013 22:10:15.728937  204091 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1013 22:10:15.736815  204091 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1013 22:10:15.736901  204091 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
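The four grep/rm pairs above are the stale-kubeconfig cleanup: each /etc/kubernetes/*.conf is kept only if it already references https://control-plane.minikube.internal:8443, and anything else is deleted so the following kubeadm init can regenerate it. A minimal Go sketch of that pattern, where the run helper is a hypothetical stand-in for minikube's ssh_runner and simply shells out locally for illustration:

package main

import (
	"fmt"
	"os/exec"
)

// run executes one shell command; a stand-in for running the same command on
// the node over SSH, as the log lines above do.
func run(cmd string) error {
	return exec.Command("/bin/sh", "-c", cmd).Run()
}

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// grep exits non-zero when the endpoint is absent or the file is missing;
		// in either case the stale file is removed so kubeadm init can recreate it.
		if err := run(fmt.Sprintf("sudo grep %q %s", endpoint, f)); err != nil {
			_ = run("sudo rm -f " + f)
		}
	}
}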
	I1013 22:10:15.744505  204091 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1013 22:10:15.788211  204091 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1013 22:10:15.788538  204091 kubeadm.go:318] [preflight] Running pre-flight checks
	I1013 22:10:15.819766  204091 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1013 22:10:15.819896  204091 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1013 22:10:15.819940  204091 kubeadm.go:318] OS: Linux
	I1013 22:10:15.819992  204091 kubeadm.go:318] CGROUPS_CPU: enabled
	I1013 22:10:15.820046  204091 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1013 22:10:15.820099  204091 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1013 22:10:15.820156  204091 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1013 22:10:15.820210  204091 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1013 22:10:15.820264  204091 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1013 22:10:15.820314  204091 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1013 22:10:15.820368  204091 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1013 22:10:15.820432  204091 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1013 22:10:15.913822  204091 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1013 22:10:15.913965  204091 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1013 22:10:15.914078  204091 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1013 22:10:15.928259  204091 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1013 22:10:14.096476  199649 node_ready.go:57] node "default-k8s-diff-port-007533" has "Ready":"False" status (will retry)
	W1013 22:10:16.593738  199649 node_ready.go:57] node "default-k8s-diff-port-007533" has "Ready":"False" status (will retry)
	I1013 22:10:15.931716  204091 out.go:252]   - Generating certificates and keys ...
	I1013 22:10:15.931834  204091 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1013 22:10:15.931909  204091 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1013 22:10:17.023507  204091 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1013 22:10:17.314418  204091 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1013 22:10:18.621107  204091 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1013 22:10:18.853521  204091 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	W1013 22:10:18.594353  199649 node_ready.go:57] node "default-k8s-diff-port-007533" has "Ready":"False" status (will retry)
	W1013 22:10:21.094148  199649 node_ready.go:57] node "default-k8s-diff-port-007533" has "Ready":"False" status (will retry)
	I1013 22:10:23.097090  199649 node_ready.go:49] node "default-k8s-diff-port-007533" is "Ready"
	I1013 22:10:23.097118  199649 node_ready.go:38] duration metric: took 40.506685569s for node "default-k8s-diff-port-007533" to be "Ready" ...
	I1013 22:10:23.097132  199649 api_server.go:52] waiting for apiserver process to appear ...
	I1013 22:10:23.097190  199649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 22:10:23.109710  199649 api_server.go:72] duration metric: took 41.511387878s to wait for apiserver process to appear ...
	I1013 22:10:23.109732  199649 api_server.go:88] waiting for apiserver healthz status ...
	I1013 22:10:23.109750  199649 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1013 22:10:23.118585  199649 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1013 22:10:23.119757  199649 api_server.go:141] control plane version: v1.34.1
	I1013 22:10:23.119851  199649 api_server.go:131] duration metric: took 10.03817ms to wait for apiserver health ...
	I1013 22:10:23.119862  199649 system_pods.go:43] waiting for kube-system pods to appear ...
	I1013 22:10:23.124402  199649 system_pods.go:59] 8 kube-system pods found
	I1013 22:10:23.124434  199649 system_pods.go:61] "coredns-66bc5c9577-vftdh" [8452dcd0-0fc3-4e41-8397-cafb1d9a184a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 22:10:23.124442  199649 system_pods.go:61] "etcd-default-k8s-diff-port-007533" [5e906fd0-4bfb-4e7c-a05c-a490a92bc11f] Running
	I1013 22:10:23.124448  199649 system_pods.go:61] "kindnet-xvkwh" [ab2dd725-7a0d-4506-83a0-757e7277facc] Running
	I1013 22:10:23.124453  199649 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-007533" [e022c2e1-54c5-444b-a1fd-06f542fc4b82] Running
	I1013 22:10:23.124466  199649 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-007533" [872726a7-7066-467d-a227-3a381c0a40a3] Running
	I1013 22:10:23.124471  199649 system_pods.go:61] "kube-proxy-5947n" [bd11df11-2e73-4ec6-a88a-4ac2faa19031] Running
	I1013 22:10:23.124476  199649 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-007533" [b816e0be-db33-44df-b7d6-366c823e1c25] Running
	I1013 22:10:23.124483  199649 system_pods.go:61] "storage-provisioner" [6082c077-fe34-4dcc-97c9-274f87bdef2a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 22:10:23.124488  199649 system_pods.go:74] duration metric: took 4.620879ms to wait for pod list to return data ...
	I1013 22:10:23.124497  199649 default_sa.go:34] waiting for default service account to be created ...
	I1013 22:10:23.130428  199649 default_sa.go:45] found service account: "default"
	I1013 22:10:23.130495  199649 default_sa.go:55] duration metric: took 5.990954ms for default service account to be created ...
	I1013 22:10:23.130517  199649 system_pods.go:116] waiting for k8s-apps to be running ...
	I1013 22:10:23.133978  199649 system_pods.go:86] 8 kube-system pods found
	I1013 22:10:23.134057  199649 system_pods.go:89] "coredns-66bc5c9577-vftdh" [8452dcd0-0fc3-4e41-8397-cafb1d9a184a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 22:10:23.134095  199649 system_pods.go:89] "etcd-default-k8s-diff-port-007533" [5e906fd0-4bfb-4e7c-a05c-a490a92bc11f] Running
	I1013 22:10:23.134123  199649 system_pods.go:89] "kindnet-xvkwh" [ab2dd725-7a0d-4506-83a0-757e7277facc] Running
	I1013 22:10:23.134140  199649 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-007533" [e022c2e1-54c5-444b-a1fd-06f542fc4b82] Running
	I1013 22:10:23.134172  199649 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-007533" [872726a7-7066-467d-a227-3a381c0a40a3] Running
	I1013 22:10:23.134194  199649 system_pods.go:89] "kube-proxy-5947n" [bd11df11-2e73-4ec6-a88a-4ac2faa19031] Running
	I1013 22:10:23.134213  199649 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-007533" [b816e0be-db33-44df-b7d6-366c823e1c25] Running
	I1013 22:10:23.134248  199649 system_pods.go:89] "storage-provisioner" [6082c077-fe34-4dcc-97c9-274f87bdef2a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 22:10:23.134288  199649 retry.go:31] will retry after 286.62348ms: missing components: kube-dns
	I1013 22:10:23.429446  199649 system_pods.go:86] 8 kube-system pods found
	I1013 22:10:23.429532  199649 system_pods.go:89] "coredns-66bc5c9577-vftdh" [8452dcd0-0fc3-4e41-8397-cafb1d9a184a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 22:10:23.429554  199649 system_pods.go:89] "etcd-default-k8s-diff-port-007533" [5e906fd0-4bfb-4e7c-a05c-a490a92bc11f] Running
	I1013 22:10:23.429590  199649 system_pods.go:89] "kindnet-xvkwh" [ab2dd725-7a0d-4506-83a0-757e7277facc] Running
	I1013 22:10:23.429613  199649 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-007533" [e022c2e1-54c5-444b-a1fd-06f542fc4b82] Running
	I1013 22:10:23.429643  199649 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-007533" [872726a7-7066-467d-a227-3a381c0a40a3] Running
	I1013 22:10:23.429674  199649 system_pods.go:89] "kube-proxy-5947n" [bd11df11-2e73-4ec6-a88a-4ac2faa19031] Running
	I1013 22:10:23.429697  199649 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-007533" [b816e0be-db33-44df-b7d6-366c823e1c25] Running
	I1013 22:10:23.429718  199649 system_pods.go:89] "storage-provisioner" [6082c077-fe34-4dcc-97c9-274f87bdef2a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 22:10:23.429763  199649 retry.go:31] will retry after 333.870213ms: missing components: kube-dns
	I1013 22:10:20.551352  204091 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1013 22:10:20.551722  204091 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-400889] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1013 22:10:21.735262  204091 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1013 22:10:21.735615  204091 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-400889] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1013 22:10:22.233744  204091 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1013 22:10:23.338248  204091 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1013 22:10:23.896344  204091 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1013 22:10:23.896592  204091 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1013 22:10:24.359449  204091 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1013 22:10:24.398217  204091 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1013 22:10:25.028569  204091 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1013 22:10:23.768817  199649 system_pods.go:86] 8 kube-system pods found
	I1013 22:10:23.768900  199649 system_pods.go:89] "coredns-66bc5c9577-vftdh" [8452dcd0-0fc3-4e41-8397-cafb1d9a184a] Running
	I1013 22:10:23.768936  199649 system_pods.go:89] "etcd-default-k8s-diff-port-007533" [5e906fd0-4bfb-4e7c-a05c-a490a92bc11f] Running
	I1013 22:10:23.768962  199649 system_pods.go:89] "kindnet-xvkwh" [ab2dd725-7a0d-4506-83a0-757e7277facc] Running
	I1013 22:10:23.768981  199649 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-007533" [e022c2e1-54c5-444b-a1fd-06f542fc4b82] Running
	I1013 22:10:23.769014  199649 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-007533" [872726a7-7066-467d-a227-3a381c0a40a3] Running
	I1013 22:10:23.769034  199649 system_pods.go:89] "kube-proxy-5947n" [bd11df11-2e73-4ec6-a88a-4ac2faa19031] Running
	I1013 22:10:23.769052  199649 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-007533" [b816e0be-db33-44df-b7d6-366c823e1c25] Running
	I1013 22:10:23.769069  199649 system_pods.go:89] "storage-provisioner" [6082c077-fe34-4dcc-97c9-274f87bdef2a] Running
	I1013 22:10:23.769104  199649 system_pods.go:126] duration metric: took 638.567391ms to wait for k8s-apps to be running ...
	I1013 22:10:23.769129  199649 system_svc.go:44] waiting for kubelet service to be running ....
	I1013 22:10:23.769217  199649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 22:10:23.802589  199649 system_svc.go:56] duration metric: took 33.438511ms WaitForService to wait for kubelet
	I1013 22:10:23.802665  199649 kubeadm.go:586] duration metric: took 42.204346882s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 22:10:23.802700  199649 node_conditions.go:102] verifying NodePressure condition ...
	I1013 22:10:23.806266  199649 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1013 22:10:23.806359  199649 node_conditions.go:123] node cpu capacity is 2
	I1013 22:10:23.806385  199649 node_conditions.go:105] duration metric: took 3.667862ms to run NodePressure ...
	I1013 22:10:23.806424  199649 start.go:241] waiting for startup goroutines ...
	I1013 22:10:23.806447  199649 start.go:246] waiting for cluster config update ...
	I1013 22:10:23.806470  199649 start.go:255] writing updated cluster config ...
	I1013 22:10:23.806824  199649 ssh_runner.go:195] Run: rm -f paused
	I1013 22:10:23.811446  199649 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 22:10:23.815305  199649 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-vftdh" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:10:23.821940  199649 pod_ready.go:94] pod "coredns-66bc5c9577-vftdh" is "Ready"
	I1013 22:10:23.822016  199649 pod_ready.go:86] duration metric: took 6.641284ms for pod "coredns-66bc5c9577-vftdh" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:10:23.825093  199649 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-007533" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:10:23.830582  199649 pod_ready.go:94] pod "etcd-default-k8s-diff-port-007533" is "Ready"
	I1013 22:10:23.830657  199649 pod_ready.go:86] duration metric: took 5.495073ms for pod "etcd-default-k8s-diff-port-007533" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:10:23.833514  199649 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-007533" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:10:23.838933  199649 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-007533" is "Ready"
	I1013 22:10:23.839007  199649 pod_ready.go:86] duration metric: took 5.420202ms for pod "kube-apiserver-default-k8s-diff-port-007533" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:10:23.841830  199649 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-007533" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:10:24.216767  199649 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-007533" is "Ready"
	I1013 22:10:24.216796  199649 pod_ready.go:86] duration metric: took 374.89771ms for pod "kube-controller-manager-default-k8s-diff-port-007533" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:10:24.417297  199649 pod_ready.go:83] waiting for pod "kube-proxy-5947n" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:10:24.816666  199649 pod_ready.go:94] pod "kube-proxy-5947n" is "Ready"
	I1013 22:10:24.816711  199649 pod_ready.go:86] duration metric: took 399.379483ms for pod "kube-proxy-5947n" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:10:25.016800  199649 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-007533" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:10:25.416456  199649 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-007533" is "Ready"
	I1013 22:10:25.416533  199649 pod_ready.go:86] duration metric: took 399.698638ms for pod "kube-scheduler-default-k8s-diff-port-007533" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:10:25.416561  199649 pod_ready.go:40] duration metric: took 1.605047524s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 22:10:25.494414  199649 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1013 22:10:25.497611  199649 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-007533" cluster and "default" namespace by default
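The api_server.go lines above amount to polling the apiserver's /healthz over HTTPS until it answers 200/ok, then reading the control-plane version. A rough, self-contained Go sketch of such a poll, reusing the URL from the log and skipping certificate verification purely to stay short (the real wait loop trusts the cluster CA instead):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		// InsecureSkipVerify only because this is an illustrative probe.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	url := "https://192.168.76.2:8444/healthz"
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body) // typically "ok"
				return
			}
		}
		time.Sleep(time.Second)
	}
	fmt.Println("apiserver never became healthy before the deadline")
}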
	I1013 22:10:26.075222  204091 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1013 22:10:26.976709  204091 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1013 22:10:26.977308  204091 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1013 22:10:26.979982  204091 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1013 22:10:26.982864  204091 out.go:252]   - Booting up control plane ...
	I1013 22:10:26.982971  204091 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1013 22:10:26.983054  204091 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1013 22:10:26.984686  204091 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1013 22:10:27.003293  204091 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1013 22:10:27.003654  204091 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1013 22:10:27.013101  204091 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1013 22:10:27.013456  204091 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1013 22:10:27.013811  204091 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1013 22:10:27.147076  204091 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1013 22:10:27.148469  204091 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1013 22:10:28.648302  204091 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.50077576s
	I1013 22:10:28.651940  204091 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1013 22:10:28.652038  204091 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1013 22:10:28.652361  204091 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1013 22:10:28.652462  204091 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
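The control-plane-check phase above waits on the kubelet healthz plus the three component endpoints. A hedged sketch that hits the same URLs once each (ports copied from the kubeadm output; TLS verification is disabled only because this is an illustrative probe, not what kubeadm itself does):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Endpoints taken from the kubeadm log lines above.
	endpoints := []string{
		"http://127.0.0.1:10248/healthz",  // kubelet
		"https://192.168.85.2:8443/livez", // kube-apiserver
		"https://127.0.0.1:10257/healthz", // kube-controller-manager
		"https://127.0.0.1:10259/livez",   // kube-scheduler
	}
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   3 * time.Second,
	}
	for _, url := range endpoints {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Printf("%-40s unreachable: %v\n", url, err)
			continue
		}
		resp.Body.Close()
		fmt.Printf("%-40s %s\n", url, resp.Status)
	}
}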
	
	
	==> CRI-O <==
	Oct 13 22:10:23 default-k8s-diff-port-007533 crio[838]: time="2025-10-13T22:10:23.265505056Z" level=info msg="Starting container: d64e702ca405ef3360befcc0a9170ab064e74748e876226ad2e3db7e1a1ab167" id=b2f258ae-b50e-4e66-9725-4e6978baa189 name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 22:10:23 default-k8s-diff-port-007533 crio[838]: time="2025-10-13T22:10:23.268490342Z" level=info msg="Started container" PID=1717 containerID=72fd8f4c241e7581495812385552130e87b95c317499df857351aba9637756ba description=kube-system/storage-provisioner/storage-provisioner id=584c154b-0837-4c7c-88b4-e22e66ae2751 name=/runtime.v1.RuntimeService/StartContainer sandboxID=22c6da45c653a6474ad70ed4d7d74e159b59312deba0f0bd8fc4c0178c5f5ea1
	Oct 13 22:10:23 default-k8s-diff-port-007533 crio[838]: time="2025-10-13T22:10:23.272234838Z" level=info msg="Started container" PID=1718 containerID=d64e702ca405ef3360befcc0a9170ab064e74748e876226ad2e3db7e1a1ab167 description=kube-system/coredns-66bc5c9577-vftdh/coredns id=b2f258ae-b50e-4e66-9725-4e6978baa189 name=/runtime.v1.RuntimeService/StartContainer sandboxID=12c7c698130f512253ca146489a9ba1f6d93b1d316c6c04a6032105a8433cbb1
	Oct 13 22:10:26 default-k8s-diff-port-007533 crio[838]: time="2025-10-13T22:10:26.112276149Z" level=info msg="Running pod sandbox: default/busybox/POD" id=7a9eea32-2d53-41e6-bda5-e044e67ecf8c name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 13 22:10:26 default-k8s-diff-port-007533 crio[838]: time="2025-10-13T22:10:26.112347253Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:10:26 default-k8s-diff-port-007533 crio[838]: time="2025-10-13T22:10:26.117428297Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:89a2190fa8bce74a74197fade10f19ccdfd45f10001cfec3f95e9aaa27d20a2a UID:021a5a33-018f-4fda-8fd6-c390d49a3993 NetNS:/var/run/netns/78067aa4-e29e-460e-97ec-3045f5f61361 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000078e18}] Aliases:map[]}"
	Oct 13 22:10:26 default-k8s-diff-port-007533 crio[838]: time="2025-10-13T22:10:26.117580605Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 13 22:10:26 default-k8s-diff-port-007533 crio[838]: time="2025-10-13T22:10:26.134717885Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:89a2190fa8bce74a74197fade10f19ccdfd45f10001cfec3f95e9aaa27d20a2a UID:021a5a33-018f-4fda-8fd6-c390d49a3993 NetNS:/var/run/netns/78067aa4-e29e-460e-97ec-3045f5f61361 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000078e18}] Aliases:map[]}"
	Oct 13 22:10:26 default-k8s-diff-port-007533 crio[838]: time="2025-10-13T22:10:26.134872621Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 13 22:10:26 default-k8s-diff-port-007533 crio[838]: time="2025-10-13T22:10:26.140670628Z" level=info msg="Ran pod sandbox 89a2190fa8bce74a74197fade10f19ccdfd45f10001cfec3f95e9aaa27d20a2a with infra container: default/busybox/POD" id=7a9eea32-2d53-41e6-bda5-e044e67ecf8c name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 13 22:10:26 default-k8s-diff-port-007533 crio[838]: time="2025-10-13T22:10:26.142090358Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=10412f26-d12b-409c-b378-fa07a295006c name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:10:26 default-k8s-diff-port-007533 crio[838]: time="2025-10-13T22:10:26.142219873Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=10412f26-d12b-409c-b378-fa07a295006c name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:10:26 default-k8s-diff-port-007533 crio[838]: time="2025-10-13T22:10:26.142263794Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=10412f26-d12b-409c-b378-fa07a295006c name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:10:26 default-k8s-diff-port-007533 crio[838]: time="2025-10-13T22:10:26.146149841Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=600b8160-2aec-4d59-bca4-1ca715d603e0 name=/runtime.v1.ImageService/PullImage
	Oct 13 22:10:26 default-k8s-diff-port-007533 crio[838]: time="2025-10-13T22:10:26.150419254Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 13 22:10:28 default-k8s-diff-port-007533 crio[838]: time="2025-10-13T22:10:28.260437988Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=600b8160-2aec-4d59-bca4-1ca715d603e0 name=/runtime.v1.ImageService/PullImage
	Oct 13 22:10:28 default-k8s-diff-port-007533 crio[838]: time="2025-10-13T22:10:28.261657165Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=64ed0778-5f2b-4554-b0d4-dd2721e54895 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:10:28 default-k8s-diff-port-007533 crio[838]: time="2025-10-13T22:10:28.268337586Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c816b0df-be13-410d-88af-eee492e47a48 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:10:28 default-k8s-diff-port-007533 crio[838]: time="2025-10-13T22:10:28.275617344Z" level=info msg="Creating container: default/busybox/busybox" id=9e35d26e-2912-4e43-a978-a7e762355b3e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:10:28 default-k8s-diff-port-007533 crio[838]: time="2025-10-13T22:10:28.277126097Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:10:28 default-k8s-diff-port-007533 crio[838]: time="2025-10-13T22:10:28.282697665Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:10:28 default-k8s-diff-port-007533 crio[838]: time="2025-10-13T22:10:28.283333964Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:10:28 default-k8s-diff-port-007533 crio[838]: time="2025-10-13T22:10:28.301741366Z" level=info msg="Created container 2a427e00f8fb4095dbe6307c653c3f5b6fa10933da420ac87655f4a2b43e9cf7: default/busybox/busybox" id=9e35d26e-2912-4e43-a978-a7e762355b3e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:10:28 default-k8s-diff-port-007533 crio[838]: time="2025-10-13T22:10:28.305176552Z" level=info msg="Starting container: 2a427e00f8fb4095dbe6307c653c3f5b6fa10933da420ac87655f4a2b43e9cf7" id=73ee566b-4410-4a49-9019-bcea709025f8 name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 22:10:28 default-k8s-diff-port-007533 crio[838]: time="2025-10-13T22:10:28.309178363Z" level=info msg="Started container" PID=1773 containerID=2a427e00f8fb4095dbe6307c653c3f5b6fa10933da420ac87655f4a2b43e9cf7 description=default/busybox/busybox id=73ee566b-4410-4a49-9019-bcea709025f8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=89a2190fa8bce74a74197fade10f19ccdfd45f10001cfec3f95e9aaa27d20a2a
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	2a427e00f8fb4       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   7 seconds ago        Running             busybox                   0                   89a2190fa8bce       busybox                                                default
	d64e702ca405e       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      12 seconds ago       Running             coredns                   0                   12c7c698130f5       coredns-66bc5c9577-vftdh                               kube-system
	72fd8f4c241e7       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      12 seconds ago       Running             storage-provisioner       0                   22c6da45c653a       storage-provisioner                                    kube-system
	d40bc9be41fcc       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      53 seconds ago       Running             kube-proxy                0                   94ebe8a812318       kube-proxy-5947n                                       kube-system
	9b3c390f39fcc       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      53 seconds ago       Running             kindnet-cni               0                   b3e13a808263e       kindnet-xvkwh                                          kube-system
	c9c8400cb90ad       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   e490370fbb75f       kube-apiserver-default-k8s-diff-port-007533            kube-system
	7e331b8f7f555       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   0f5c986c05ff0       kube-controller-manager-default-k8s-diff-port-007533   kube-system
	1d648bb346ebd       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   c5e946427b0ff       kube-scheduler-default-k8s-diff-port-007533            kube-system
	e219a49286708       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   b23dc57daf660       etcd-default-k8s-diff-port-007533                      kube-system
	
	
	==> coredns [d64e702ca405ef3360befcc0a9170ab064e74748e876226ad2e3db7e1a1ab167] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53737 - 38433 "HINFO IN 6859449826854678401.7617216936249142270. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.040629074s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-007533
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-007533
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6d66ff63385795e7745a92b3d96cb54f5b977801
	                    minikube.k8s.io/name=default-k8s-diff-port-007533
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_13T22_09_37_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 Oct 2025 22:09:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-007533
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 Oct 2025 22:10:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 Oct 2025 22:10:22 +0000   Mon, 13 Oct 2025 22:09:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 Oct 2025 22:10:22 +0000   Mon, 13 Oct 2025 22:09:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 Oct 2025 22:10:22 +0000   Mon, 13 Oct 2025 22:09:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 13 Oct 2025 22:10:22 +0000   Mon, 13 Oct 2025 22:10:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-007533
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 e26faf7d46cf467f898122046b66445d
	  System UUID:                31edf4b0-bfde-45c9-96bd-f89ce401d052
	  Boot ID:                    d306204e-e74c-4697-9887-c9f3c96cc083
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-vftdh                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     54s
	  kube-system                 etcd-default-k8s-diff-port-007533                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         59s
	  kube-system                 kindnet-xvkwh                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      54s
	  kube-system                 kube-apiserver-default-k8s-diff-port-007533             250m (12%)    0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-007533    200m (10%)    0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-proxy-5947n                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kube-system                 kube-scheduler-default-k8s-diff-port-007533             100m (5%)     0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 53s                kube-proxy       
	  Normal   Starting                 67s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 67s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  67s (x8 over 67s)  kubelet          Node default-k8s-diff-port-007533 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    67s (x8 over 67s)  kubelet          Node default-k8s-diff-port-007533 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     67s (x8 over 67s)  kubelet          Node default-k8s-diff-port-007533 status is now: NodeHasSufficientPID
	  Normal   Starting                 59s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 59s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  59s                kubelet          Node default-k8s-diff-port-007533 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    59s                kubelet          Node default-k8s-diff-port-007533 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     59s                kubelet          Node default-k8s-diff-port-007533 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           55s                node-controller  Node default-k8s-diff-port-007533 event: Registered Node default-k8s-diff-port-007533 in Controller
	  Normal   NodeReady                13s                kubelet          Node default-k8s-diff-port-007533 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct13 21:41] overlayfs: idmapped layers are currently not supported
	[Oct13 21:42] overlayfs: idmapped layers are currently not supported
	[  +7.684868] overlayfs: idmapped layers are currently not supported
	[Oct13 21:43] overlayfs: idmapped layers are currently not supported
	[ +17.500139] overlayfs: idmapped layers are currently not supported
	[Oct13 21:44] overlayfs: idmapped layers are currently not supported
	[ +25.978359] overlayfs: idmapped layers are currently not supported
	[Oct13 21:46] overlayfs: idmapped layers are currently not supported
	[Oct13 21:47] overlayfs: idmapped layers are currently not supported
	[Oct13 21:49] overlayfs: idmapped layers are currently not supported
	[Oct13 21:50] overlayfs: idmapped layers are currently not supported
	[Oct13 21:51] overlayfs: idmapped layers are currently not supported
	[Oct13 21:53] overlayfs: idmapped layers are currently not supported
	[Oct13 21:54] overlayfs: idmapped layers are currently not supported
	[Oct13 21:55] overlayfs: idmapped layers are currently not supported
	[Oct13 22:02] overlayfs: idmapped layers are currently not supported
	[Oct13 22:04] overlayfs: idmapped layers are currently not supported
	[ +37.438407] overlayfs: idmapped layers are currently not supported
	[Oct13 22:05] overlayfs: idmapped layers are currently not supported
	[Oct13 22:06] overlayfs: idmapped layers are currently not supported
	[Oct13 22:07] overlayfs: idmapped layers are currently not supported
	[ +29.672836] overlayfs: idmapped layers are currently not supported
	[Oct13 22:08] overlayfs: idmapped layers are currently not supported
	[Oct13 22:09] overlayfs: idmapped layers are currently not supported
	[Oct13 22:10] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [e219a492867087f00a0e8f28ce1b71644d539c8e78d096d1b51b618a88fe3ab8] <==
	{"level":"warn","ts":"2025-10-13T22:09:31.886365Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:09:31.909127Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:09:31.932787Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:09:31.957191Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:09:31.967760Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:09:31.980055Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:09:32.009265Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:09:32.021386Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:09:32.037990Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:09:32.072444Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:09:32.091203Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:09:32.124717Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:09:32.171204Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:09:32.185175Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:09:32.210373Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:09:32.228038Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:09:32.240376Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:09:32.259544Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:09:32.299859Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:09:32.301902Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:09:32.318468Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:09:32.363882Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:09:32.368767Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:09:32.391354Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:09:32.468021Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41106","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 22:10:35 up  1:52,  0 user,  load average: 3.15, 2.96, 2.35
	Linux default-k8s-diff-port-007533 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [9b3c390f39fcc5474f1ff7b336e02f1183a1101c16ab8a39221ae991e45ac514] <==
	I1013 22:09:42.122895       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1013 22:09:42.123194       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1013 22:09:42.123329       1 main.go:148] setting mtu 1500 for CNI 
	I1013 22:09:42.123344       1 main.go:178] kindnetd IP family: "ipv4"
	I1013 22:09:42.123359       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-13T22:09:42Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1013 22:09:42.406844       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1013 22:09:42.406863       1 controller.go:381] "Waiting for informer caches to sync"
	I1013 22:09:42.406871       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1013 22:09:42.407142       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1013 22:10:12.406707       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1013 22:10:12.406946       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1013 22:10:12.407076       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1013 22:10:12.408387       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1013 22:10:13.907022       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1013 22:10:13.907059       1 metrics.go:72] Registering metrics
	I1013 22:10:13.907129       1 controller.go:711] "Syncing nftables rules"
	I1013 22:10:22.413509       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1013 22:10:22.413571       1 main.go:301] handling current node
	I1013 22:10:32.408504       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1013 22:10:32.408587       1 main.go:301] handling current node
	
	
	==> kube-apiserver [c9c8400cb90ad5d40d3931e069984798e7a0dd8449f82266ce1aee0c5df824b1] <==
	E1013 22:09:33.529449       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	E1013 22:09:33.533489       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1013 22:09:33.564077       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1013 22:09:33.564584       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1013 22:09:33.585917       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1013 22:09:33.586570       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1013 22:09:33.588847       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1013 22:09:33.754035       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1013 22:09:34.219418       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1013 22:09:34.228393       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1013 22:09:34.228484       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1013 22:09:34.951245       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1013 22:09:35.003237       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1013 22:09:35.136373       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1013 22:09:35.144820       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1013 22:09:35.146043       1 controller.go:667] quota admission added evaluator for: endpoints
	I1013 22:09:35.153969       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1013 22:09:35.360216       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1013 22:09:36.280698       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1013 22:09:36.309001       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1013 22:09:36.331204       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1013 22:09:41.215011       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1013 22:09:41.219679       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1013 22:09:41.365506       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1013 22:09:41.467507       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [7e331b8f7f555c270a3e9b5e374a5dc24beb785824c0e1260f6a5b47fc636c41] <==
	I1013 22:09:40.384390       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1013 22:09:40.398738       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1013 22:09:40.400399       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 22:09:40.406614       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1013 22:09:40.407854       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1013 22:09:40.407890       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1013 22:09:40.408060       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1013 22:09:40.408617       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1013 22:09:40.409520       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1013 22:09:40.409552       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1013 22:09:40.410694       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1013 22:09:40.410698       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1013 22:09:40.410788       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1013 22:09:40.412461       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1013 22:09:40.412585       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1013 22:09:40.412680       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-007533"
	I1013 22:09:40.412744       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1013 22:09:40.414437       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1013 22:09:40.414547       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1013 22:09:40.414624       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1013 22:09:40.414823       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1013 22:09:40.414892       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1013 22:09:40.416552       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1013 22:09:40.425000       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-007533" podCIDRs=["10.244.0.0/24"]
	I1013 22:10:25.423462       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [d40bc9be41fcc70ef9a3ca43ae4778589aa002d029e91524ae09b1f15c0b44a1] <==
	I1013 22:09:42.117460       1 server_linux.go:53] "Using iptables proxy"
	I1013 22:09:42.298049       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 22:09:42.448563       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 22:09:42.448596       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1013 22:09:42.448688       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 22:09:42.559087       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1013 22:09:42.559138       1 server_linux.go:132] "Using iptables Proxier"
	I1013 22:09:42.572861       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 22:09:42.573182       1 server.go:527] "Version info" version="v1.34.1"
	I1013 22:09:42.573194       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 22:09:42.574277       1 config.go:200] "Starting service config controller"
	I1013 22:09:42.574289       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 22:09:42.585220       1 config.go:106] "Starting endpoint slice config controller"
	I1013 22:09:42.588012       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 22:09:42.588072       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 22:09:42.588077       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 22:09:42.611807       1 config.go:309] "Starting node config controller"
	I1013 22:09:42.611899       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 22:09:42.612508       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 22:09:42.675878       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1013 22:09:42.688171       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1013 22:09:42.688287       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [1d648bb346ebdbb0fb7f70482b50eb097898517724eea3645353d79c91b352c5] <==
	I1013 22:09:34.136146       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 22:09:34.136174       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1013 22:09:34.137934       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1013 22:09:34.138307       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1013 22:09:34.138369       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1013 22:09:34.147981       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1013 22:09:34.148111       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1013 22:09:34.148167       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1013 22:09:34.148227       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1013 22:09:34.148284       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1013 22:09:34.148334       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1013 22:09:34.150065       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1013 22:09:34.150146       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1013 22:09:34.150185       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1013 22:09:34.150202       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1013 22:09:34.150252       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1013 22:09:34.151225       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1013 22:09:34.151297       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1013 22:09:34.151401       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1013 22:09:34.155280       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1013 22:09:34.155280       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1013 22:09:34.155416       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1013 22:09:34.155420       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1013 22:09:35.014279       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1013 22:09:37.836423       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 13 22:09:37 default-k8s-diff-port-007533 kubelet[1294]: I1013 22:09:37.530322    1294 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-default-k8s-diff-port-007533" podStartSLOduration=1.530299528 podStartE2EDuration="1.530299528s" podCreationTimestamp="2025-10-13 22:09:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 22:09:37.514090379 +0000 UTC m=+1.362897681" watchObservedRunningTime="2025-10-13 22:09:37.530299528 +0000 UTC m=+1.379106829"
	Oct 13 22:09:40 default-k8s-diff-port-007533 kubelet[1294]: I1013 22:09:40.440579    1294 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 13 22:09:40 default-k8s-diff-port-007533 kubelet[1294]: I1013 22:09:40.441235    1294 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 13 22:09:41 default-k8s-diff-port-007533 kubelet[1294]: I1013 22:09:41.613893    1294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkjvz\" (UniqueName: \"kubernetes.io/projected/ab2dd725-7a0d-4506-83a0-757e7277facc-kube-api-access-rkjvz\") pod \"kindnet-xvkwh\" (UID: \"ab2dd725-7a0d-4506-83a0-757e7277facc\") " pod="kube-system/kindnet-xvkwh"
	Oct 13 22:09:41 default-k8s-diff-port-007533 kubelet[1294]: I1013 22:09:41.613969    1294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4m9r\" (UniqueName: \"kubernetes.io/projected/bd11df11-2e73-4ec6-a88a-4ac2faa19031-kube-api-access-k4m9r\") pod \"kube-proxy-5947n\" (UID: \"bd11df11-2e73-4ec6-a88a-4ac2faa19031\") " pod="kube-system/kube-proxy-5947n"
	Oct 13 22:09:41 default-k8s-diff-port-007533 kubelet[1294]: I1013 22:09:41.614011    1294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/ab2dd725-7a0d-4506-83a0-757e7277facc-cni-cfg\") pod \"kindnet-xvkwh\" (UID: \"ab2dd725-7a0d-4506-83a0-757e7277facc\") " pod="kube-system/kindnet-xvkwh"
	Oct 13 22:09:41 default-k8s-diff-port-007533 kubelet[1294]: I1013 22:09:41.614036    1294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ab2dd725-7a0d-4506-83a0-757e7277facc-xtables-lock\") pod \"kindnet-xvkwh\" (UID: \"ab2dd725-7a0d-4506-83a0-757e7277facc\") " pod="kube-system/kindnet-xvkwh"
	Oct 13 22:09:41 default-k8s-diff-port-007533 kubelet[1294]: I1013 22:09:41.614057    1294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ab2dd725-7a0d-4506-83a0-757e7277facc-lib-modules\") pod \"kindnet-xvkwh\" (UID: \"ab2dd725-7a0d-4506-83a0-757e7277facc\") " pod="kube-system/kindnet-xvkwh"
	Oct 13 22:09:41 default-k8s-diff-port-007533 kubelet[1294]: I1013 22:09:41.614076    1294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/bd11df11-2e73-4ec6-a88a-4ac2faa19031-kube-proxy\") pod \"kube-proxy-5947n\" (UID: \"bd11df11-2e73-4ec6-a88a-4ac2faa19031\") " pod="kube-system/kube-proxy-5947n"
	Oct 13 22:09:41 default-k8s-diff-port-007533 kubelet[1294]: I1013 22:09:41.614105    1294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bd11df11-2e73-4ec6-a88a-4ac2faa19031-xtables-lock\") pod \"kube-proxy-5947n\" (UID: \"bd11df11-2e73-4ec6-a88a-4ac2faa19031\") " pod="kube-system/kube-proxy-5947n"
	Oct 13 22:09:41 default-k8s-diff-port-007533 kubelet[1294]: I1013 22:09:41.614121    1294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bd11df11-2e73-4ec6-a88a-4ac2faa19031-lib-modules\") pod \"kube-proxy-5947n\" (UID: \"bd11df11-2e73-4ec6-a88a-4ac2faa19031\") " pod="kube-system/kube-proxy-5947n"
	Oct 13 22:09:41 default-k8s-diff-port-007533 kubelet[1294]: I1013 22:09:41.771027    1294 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 13 22:09:41 default-k8s-diff-port-007533 kubelet[1294]: W1013 22:09:41.846315    1294 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/42b7859eebb1b617062a071a3162eafb492415d4bedb857080058ccb1421015f/crio-b3e13a808263ef0165d741e30e2a0e436da21bf884c24bffabdcb823436533e1 WatchSource:0}: Error finding container b3e13a808263ef0165d741e30e2a0e436da21bf884c24bffabdcb823436533e1: Status 404 returned error can't find the container with id b3e13a808263ef0165d741e30e2a0e436da21bf884c24bffabdcb823436533e1
	Oct 13 22:09:42 default-k8s-diff-port-007533 kubelet[1294]: I1013 22:09:42.433926    1294 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5947n" podStartSLOduration=1.4339093250000001 podStartE2EDuration="1.433909325s" podCreationTimestamp="2025-10-13 22:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 22:09:42.390287365 +0000 UTC m=+6.239094650" watchObservedRunningTime="2025-10-13 22:09:42.433909325 +0000 UTC m=+6.282716618"
	Oct 13 22:09:42 default-k8s-diff-port-007533 kubelet[1294]: I1013 22:09:42.490173    1294 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-xvkwh" podStartSLOduration=1.490153447 podStartE2EDuration="1.490153447s" podCreationTimestamp="2025-10-13 22:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 22:09:42.434197096 +0000 UTC m=+6.283004381" watchObservedRunningTime="2025-10-13 22:09:42.490153447 +0000 UTC m=+6.338960740"
	Oct 13 22:10:22 default-k8s-diff-port-007533 kubelet[1294]: I1013 22:10:22.809477    1294 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 13 22:10:22 default-k8s-diff-port-007533 kubelet[1294]: I1013 22:10:22.939967    1294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8452dcd0-0fc3-4e41-8397-cafb1d9a184a-config-volume\") pod \"coredns-66bc5c9577-vftdh\" (UID: \"8452dcd0-0fc3-4e41-8397-cafb1d9a184a\") " pod="kube-system/coredns-66bc5c9577-vftdh"
	Oct 13 22:10:22 default-k8s-diff-port-007533 kubelet[1294]: I1013 22:10:22.940189    1294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ns942\" (UniqueName: \"kubernetes.io/projected/6082c077-fe34-4dcc-97c9-274f87bdef2a-kube-api-access-ns942\") pod \"storage-provisioner\" (UID: \"6082c077-fe34-4dcc-97c9-274f87bdef2a\") " pod="kube-system/storage-provisioner"
	Oct 13 22:10:22 default-k8s-diff-port-007533 kubelet[1294]: I1013 22:10:22.940279    1294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnr8g\" (UniqueName: \"kubernetes.io/projected/8452dcd0-0fc3-4e41-8397-cafb1d9a184a-kube-api-access-jnr8g\") pod \"coredns-66bc5c9577-vftdh\" (UID: \"8452dcd0-0fc3-4e41-8397-cafb1d9a184a\") " pod="kube-system/coredns-66bc5c9577-vftdh"
	Oct 13 22:10:22 default-k8s-diff-port-007533 kubelet[1294]: I1013 22:10:22.940355    1294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/6082c077-fe34-4dcc-97c9-274f87bdef2a-tmp\") pod \"storage-provisioner\" (UID: \"6082c077-fe34-4dcc-97c9-274f87bdef2a\") " pod="kube-system/storage-provisioner"
	Oct 13 22:10:23 default-k8s-diff-port-007533 kubelet[1294]: W1013 22:10:23.184763    1294 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/42b7859eebb1b617062a071a3162eafb492415d4bedb857080058ccb1421015f/crio-22c6da45c653a6474ad70ed4d7d74e159b59312deba0f0bd8fc4c0178c5f5ea1 WatchSource:0}: Error finding container 22c6da45c653a6474ad70ed4d7d74e159b59312deba0f0bd8fc4c0178c5f5ea1: Status 404 returned error can't find the container with id 22c6da45c653a6474ad70ed4d7d74e159b59312deba0f0bd8fc4c0178c5f5ea1
	Oct 13 22:10:23 default-k8s-diff-port-007533 kubelet[1294]: I1013 22:10:23.504301    1294 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=41.504280469 podStartE2EDuration="41.504280469s" podCreationTimestamp="2025-10-13 22:09:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 22:10:23.487069402 +0000 UTC m=+47.335876695" watchObservedRunningTime="2025-10-13 22:10:23.504280469 +0000 UTC m=+47.353087762"
	Oct 13 22:10:25 default-k8s-diff-port-007533 kubelet[1294]: I1013 22:10:25.801400    1294 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-vftdh" podStartSLOduration=44.801376914 podStartE2EDuration="44.801376914s" podCreationTimestamp="2025-10-13 22:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 22:10:23.505042722 +0000 UTC m=+47.353850015" watchObservedRunningTime="2025-10-13 22:10:25.801376914 +0000 UTC m=+49.650184199"
	Oct 13 22:10:25 default-k8s-diff-port-007533 kubelet[1294]: I1013 22:10:25.959360    1294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q65wg\" (UniqueName: \"kubernetes.io/projected/021a5a33-018f-4fda-8fd6-c390d49a3993-kube-api-access-q65wg\") pod \"busybox\" (UID: \"021a5a33-018f-4fda-8fd6-c390d49a3993\") " pod="default/busybox"
	Oct 13 22:10:26 default-k8s-diff-port-007533 kubelet[1294]: W1013 22:10:26.141134    1294 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/42b7859eebb1b617062a071a3162eafb492415d4bedb857080058ccb1421015f/crio-89a2190fa8bce74a74197fade10f19ccdfd45f10001cfec3f95e9aaa27d20a2a WatchSource:0}: Error finding container 89a2190fa8bce74a74197fade10f19ccdfd45f10001cfec3f95e9aaa27d20a2a: Status 404 returned error can't find the container with id 89a2190fa8bce74a74197fade10f19ccdfd45f10001cfec3f95e9aaa27d20a2a
	
	
	==> storage-provisioner [72fd8f4c241e7581495812385552130e87b95c317499df857351aba9637756ba] <==
	I1013 22:10:23.314836       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1013 22:10:23.368833       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1013 22:10:23.368949       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1013 22:10:23.371295       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:10:23.376961       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1013 22:10:23.377175       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1013 22:10:23.377360       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-007533_428a71eb-3f2c-4525-a54c-34ac196e6dd3!
	I1013 22:10:23.379514       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a74fbebd-1296-493d-a460-f6003ff9a0e7", APIVersion:"v1", ResourceVersion:"455", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-007533_428a71eb-3f2c-4525-a54c-34ac196e6dd3 became leader
	W1013 22:10:23.384343       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:10:23.394527       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1013 22:10:23.483872       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-007533_428a71eb-3f2c-4525-a54c-34ac196e6dd3!
	W1013 22:10:25.399465       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:10:25.409110       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:10:27.411942       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:10:27.417277       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:10:29.420901       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:10:29.428855       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:10:31.432580       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:10:31.439987       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:10:33.443313       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:10:33.447766       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:10:35.453858       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:10:35.461363       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-007533 -n default-k8s-diff-port-007533
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-007533 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (3.06s)
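For reference, an individual failing sub-test such as the one above can be re-run in isolation with the standard Go sub-test selector. This is only a sketch, assuming the minikube source tree's test/integration layout; it omits the driver and container-runtime arguments the CI job passes:

    go test ./test/integration/... -v -timeout 60m \
      -run 'TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive'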

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.31s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-400889 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-400889 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (275.907977ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:10:42Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-400889 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
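The MK_ADDON_ENABLE_PAUSED error above comes from minikube's check that no containers on the node are paused, which (per the stderr shown) shells out to `sudo runc list -f json` and treats its non-zero exit as a failure. A minimal way to reproduce that check by hand against this profile (a sketch, not something the test harness runs):

    out/minikube-linux-arm64 -p newest-cni-400889 ssh -- sudo runc list -f json
    # On this node /run/runc does not exist, so runc exits with status 1 and
    # the addon enable path surfaces that as MK_ADDON_ENABLE_PAUSED.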
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-400889
helpers_test.go:243: (dbg) docker inspect newest-cni-400889:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "327a4b5bba33839b9e9e68b98fb65f311428d8ca9a779bf2218877efb3db1dda",
	        "Created": "2025-10-13T22:10:05.991697046Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 204474,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-13T22:10:06.062503316Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/327a4b5bba33839b9e9e68b98fb65f311428d8ca9a779bf2218877efb3db1dda/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/327a4b5bba33839b9e9e68b98fb65f311428d8ca9a779bf2218877efb3db1dda/hostname",
	        "HostsPath": "/var/lib/docker/containers/327a4b5bba33839b9e9e68b98fb65f311428d8ca9a779bf2218877efb3db1dda/hosts",
	        "LogPath": "/var/lib/docker/containers/327a4b5bba33839b9e9e68b98fb65f311428d8ca9a779bf2218877efb3db1dda/327a4b5bba33839b9e9e68b98fb65f311428d8ca9a779bf2218877efb3db1dda-json.log",
	        "Name": "/newest-cni-400889",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-400889:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-400889",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "327a4b5bba33839b9e9e68b98fb65f311428d8ca9a779bf2218877efb3db1dda",
	                "LowerDir": "/var/lib/docker/overlay2/c2ce8a657d5380be77a974a499c284981153c449892cad04318c236219fcf9f7-init/diff:/var/lib/docker/overlay2/160e637be34680c755ffc6109c686a7fe5c4dfcba06c9274b3806724dc064518/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c2ce8a657d5380be77a974a499c284981153c449892cad04318c236219fcf9f7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c2ce8a657d5380be77a974a499c284981153c449892cad04318c236219fcf9f7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c2ce8a657d5380be77a974a499c284981153c449892cad04318c236219fcf9f7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-400889",
	                "Source": "/var/lib/docker/volumes/newest-cni-400889/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-400889",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-400889",
	                "name.minikube.sigs.k8s.io": "newest-cni-400889",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0dea0335ce9792ea7ad270bc7939e2967d95c8366682c24223fa9b9dc4ecd1c2",
	            "SandboxKey": "/var/run/docker/netns/0dea0335ce97",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-400889": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "96:54:86:a7:2c:59",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d596263e55a2c1a0ad1158c1d748ddecdc9ebcca3cfd3b93c9472d82661a4237",
	                    "EndpointID": "436c8f892ff260db9753f6ade6fa4c8eb115b1ac89caf20d9b55f4a7c243382d",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-400889",
	                        "327a4b5bba33"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
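As an aside on the docker inspect output above: the host port that forwards to the API server (container port 8443/tcp, shown as 33089 here) can be extracted with a standard docker Go template. This one-liner is illustrative only and is not part of the test flow:

    docker inspect newest-cni-400889 \
      --format '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}'
    # prints 33089 for the container shown above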
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-400889 -n newest-cni-400889
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-400889 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-400889 logs -n 25: (1.077028277s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ delete  │ -p old-k8s-version-061725                                                                                                                                                                                                                     │ old-k8s-version-061725       │ jenkins │ v1.37.0 │ 13 Oct 25 22:06 UTC │ 13 Oct 25 22:07 UTC │
	│ delete  │ -p old-k8s-version-061725                                                                                                                                                                                                                     │ old-k8s-version-061725       │ jenkins │ v1.37.0 │ 13 Oct 25 22:07 UTC │ 13 Oct 25 22:07 UTC │
	│ start   │ -p embed-certs-251758 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-251758           │ jenkins │ v1.37.0 │ 13 Oct 25 22:07 UTC │ 13 Oct 25 22:08 UTC │
	│ addons  │ enable metrics-server -p no-preload-998398 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-998398            │ jenkins │ v1.37.0 │ 13 Oct 25 22:07 UTC │                     │
	│ stop    │ -p no-preload-998398 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-998398            │ jenkins │ v1.37.0 │ 13 Oct 25 22:07 UTC │ 13 Oct 25 22:07 UTC │
	│ addons  │ enable dashboard -p no-preload-998398 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-998398            │ jenkins │ v1.37.0 │ 13 Oct 25 22:07 UTC │ 13 Oct 25 22:07 UTC │
	│ start   │ -p no-preload-998398 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-998398            │ jenkins │ v1.37.0 │ 13 Oct 25 22:07 UTC │ 13 Oct 25 22:08 UTC │
	│ addons  │ enable metrics-server -p embed-certs-251758 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-251758           │ jenkins │ v1.37.0 │ 13 Oct 25 22:08 UTC │                     │
	│ stop    │ -p embed-certs-251758 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-251758           │ jenkins │ v1.37.0 │ 13 Oct 25 22:08 UTC │ 13 Oct 25 22:08 UTC │
	│ addons  │ enable dashboard -p embed-certs-251758 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-251758           │ jenkins │ v1.37.0 │ 13 Oct 25 22:08 UTC │ 13 Oct 25 22:08 UTC │
	│ start   │ -p embed-certs-251758 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-251758           │ jenkins │ v1.37.0 │ 13 Oct 25 22:08 UTC │ 13 Oct 25 22:09 UTC │
	│ image   │ no-preload-998398 image list --format=json                                                                                                                                                                                                    │ no-preload-998398            │ jenkins │ v1.37.0 │ 13 Oct 25 22:08 UTC │ 13 Oct 25 22:08 UTC │
	│ pause   │ -p no-preload-998398 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-998398            │ jenkins │ v1.37.0 │ 13 Oct 25 22:08 UTC │                     │
	│ delete  │ -p no-preload-998398                                                                                                                                                                                                                          │ no-preload-998398            │ jenkins │ v1.37.0 │ 13 Oct 25 22:08 UTC │ 13 Oct 25 22:09 UTC │
	│ delete  │ -p no-preload-998398                                                                                                                                                                                                                          │ no-preload-998398            │ jenkins │ v1.37.0 │ 13 Oct 25 22:09 UTC │ 13 Oct 25 22:09 UTC │
	│ delete  │ -p disable-driver-mounts-691681                                                                                                                                                                                                               │ disable-driver-mounts-691681 │ jenkins │ v1.37.0 │ 13 Oct 25 22:09 UTC │ 13 Oct 25 22:09 UTC │
	│ start   │ -p default-k8s-diff-port-007533 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-007533 │ jenkins │ v1.37.0 │ 13 Oct 25 22:09 UTC │ 13 Oct 25 22:10 UTC │
	│ image   │ embed-certs-251758 image list --format=json                                                                                                                                                                                                   │ embed-certs-251758           │ jenkins │ v1.37.0 │ 13 Oct 25 22:09 UTC │ 13 Oct 25 22:09 UTC │
	│ pause   │ -p embed-certs-251758 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-251758           │ jenkins │ v1.37.0 │ 13 Oct 25 22:09 UTC │                     │
	│ delete  │ -p embed-certs-251758                                                                                                                                                                                                                         │ embed-certs-251758           │ jenkins │ v1.37.0 │ 13 Oct 25 22:09 UTC │ 13 Oct 25 22:09 UTC │
	│ delete  │ -p embed-certs-251758                                                                                                                                                                                                                         │ embed-certs-251758           │ jenkins │ v1.37.0 │ 13 Oct 25 22:10 UTC │ 13 Oct 25 22:10 UTC │
	│ start   │ -p newest-cni-400889 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-400889            │ jenkins │ v1.37.0 │ 13 Oct 25 22:10 UTC │ 13 Oct 25 22:10 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-007533 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-007533 │ jenkins │ v1.37.0 │ 13 Oct 25 22:10 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-007533 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-007533 │ jenkins │ v1.37.0 │ 13 Oct 25 22:10 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-400889 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-400889            │ jenkins │ v1.37.0 │ 13 Oct 25 22:10 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 22:10:00
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 22:10:00.478169  204091 out.go:360] Setting OutFile to fd 1 ...
	I1013 22:10:00.478494  204091 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:10:00.478509  204091 out.go:374] Setting ErrFile to fd 2...
	I1013 22:10:00.478515  204091 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:10:00.478906  204091 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-2495/.minikube/bin
	I1013 22:10:00.479722  204091 out.go:368] Setting JSON to false
	I1013 22:10:00.483577  204091 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":6735,"bootTime":1760386666,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1013 22:10:00.484073  204091 start.go:141] virtualization:  
	I1013 22:10:00.488639  204091 out.go:179] * [newest-cni-400889] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1013 22:10:00.493601  204091 notify.go:220] Checking for updates...
	I1013 22:10:00.493608  204091 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 22:10:00.497329  204091 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 22:10:00.500918  204091 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-2495/kubeconfig
	I1013 22:10:00.504470  204091 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-2495/.minikube
	I1013 22:10:00.507944  204091 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1013 22:10:00.511642  204091 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 22:10:00.517656  204091 config.go:182] Loaded profile config "default-k8s-diff-port-007533": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:10:00.517934  204091 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 22:10:00.550819  204091 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1013 22:10:00.551000  204091 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 22:10:00.619311  204091 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-13 22:10:00.609250765 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 22:10:00.619444  204091 docker.go:318] overlay module found
	I1013 22:10:00.622730  204091 out.go:179] * Using the docker driver based on user configuration
	I1013 22:10:00.625674  204091 start.go:305] selected driver: docker
	I1013 22:10:00.625699  204091 start.go:925] validating driver "docker" against <nil>
	I1013 22:10:00.625715  204091 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 22:10:00.626519  204091 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 22:10:00.686470  204091 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-13 22:10:00.676519304 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 22:10:00.686636  204091 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1013 22:10:00.686676  204091 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1013 22:10:00.686915  204091 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1013 22:10:00.690197  204091 out.go:179] * Using Docker driver with root privileges
	I1013 22:10:00.693035  204091 cni.go:84] Creating CNI manager for ""
	I1013 22:10:00.693113  204091 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 22:10:00.693126  204091 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1013 22:10:00.693216  204091 start.go:349] cluster config:
	{Name:newest-cni-400889 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-400889 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:10:00.698480  204091 out.go:179] * Starting "newest-cni-400889" primary control-plane node in "newest-cni-400889" cluster
	I1013 22:10:00.701301  204091 cache.go:123] Beginning downloading kic base image for docker with crio
	I1013 22:10:00.704432  204091 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1013 22:10:00.707326  204091 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:10:00.707392  204091 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-2495/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1013 22:10:00.707407  204091 cache.go:58] Caching tarball of preloaded images
	I1013 22:10:00.707427  204091 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1013 22:10:00.707503  204091 preload.go:233] Found /home/jenkins/minikube-integration/21724-2495/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1013 22:10:00.707515  204091 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1013 22:10:00.707638  204091 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/newest-cni-400889/config.json ...
	I1013 22:10:00.707668  204091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/newest-cni-400889/config.json: {Name:mk3eb9ca30ea512cd35124a1e85c1aa47db49843 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:10:00.729244  204091 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1013 22:10:00.729278  204091 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1013 22:10:00.729309  204091 cache.go:232] Successfully downloaded all kic artifacts
	I1013 22:10:00.729344  204091 start.go:360] acquireMachinesLock for newest-cni-400889: {Name:mk77b7fd736221bee1ff61b7d071134f1c9c511b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 22:10:00.729471  204091 start.go:364] duration metric: took 100.666µs to acquireMachinesLock for "newest-cni-400889"
	I1013 22:10:00.729505  204091 start.go:93] Provisioning new machine with config: &{Name:newest-cni-400889 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-400889 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 22:10:00.729578  204091 start.go:125] createHost starting for "" (driver="docker")
	W1013 22:09:58.593856  199649 node_ready.go:57] node "default-k8s-diff-port-007533" has "Ready":"False" status (will retry)
	W1013 22:10:00.594587  199649 node_ready.go:57] node "default-k8s-diff-port-007533" has "Ready":"False" status (will retry)
	W1013 22:10:03.094421  199649 node_ready.go:57] node "default-k8s-diff-port-007533" has "Ready":"False" status (will retry)
	I1013 22:10:00.733173  204091 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1013 22:10:00.733432  204091 start.go:159] libmachine.API.Create for "newest-cni-400889" (driver="docker")
	I1013 22:10:00.733487  204091 client.go:168] LocalClient.Create starting
	I1013 22:10:00.733585  204091 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem
	I1013 22:10:00.733627  204091 main.go:141] libmachine: Decoding PEM data...
	I1013 22:10:00.733640  204091 main.go:141] libmachine: Parsing certificate...
	I1013 22:10:00.733702  204091 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem
	I1013 22:10:00.733721  204091 main.go:141] libmachine: Decoding PEM data...
	I1013 22:10:00.733731  204091 main.go:141] libmachine: Parsing certificate...
	I1013 22:10:00.734134  204091 cli_runner.go:164] Run: docker network inspect newest-cni-400889 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1013 22:10:00.751316  204091 cli_runner.go:211] docker network inspect newest-cni-400889 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1013 22:10:00.751399  204091 network_create.go:284] running [docker network inspect newest-cni-400889] to gather additional debugging logs...
	I1013 22:10:00.751421  204091 cli_runner.go:164] Run: docker network inspect newest-cni-400889
	W1013 22:10:00.768470  204091 cli_runner.go:211] docker network inspect newest-cni-400889 returned with exit code 1
	I1013 22:10:00.768504  204091 network_create.go:287] error running [docker network inspect newest-cni-400889]: docker network inspect newest-cni-400889: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-400889 not found
	I1013 22:10:00.768518  204091 network_create.go:289] output of [docker network inspect newest-cni-400889]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-400889 not found
	
	** /stderr **
	I1013 22:10:00.768624  204091 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 22:10:00.786382  204091 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-95647f6063f5 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:2e:3d:b3:ce:26:60} reservation:<nil>}
	I1013 22:10:00.786735  204091 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-524c3512c6b6 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:06:88:a1:02:e0:8e} reservation:<nil>}
	I1013 22:10:00.787146  204091 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-2d17b8b5c002 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:aa:ca:29:7e:1f:a0} reservation:<nil>}
	I1013 22:10:00.787464  204091 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-c207adec0a14 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:3a:30:41:df:49:ee} reservation:<nil>}
	I1013 22:10:00.788111  204091 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019ef050}
	I1013 22:10:00.788139  204091 network_create.go:124] attempt to create docker network newest-cni-400889 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1013 22:10:00.788221  204091 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-400889 newest-cni-400889
	I1013 22:10:00.852515  204091 network_create.go:108] docker network newest-cni-400889 192.168.85.0/24 created
	I1013 22:10:00.852546  204091 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-400889" container
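The block above shows minikube walking 192.168.x.0/24 candidates, skipping any subnet an existing docker bridge already owns, and then creating the newest-cni-400889 network on the first free one (192.168.85.0/24, gateway 192.168.85.1) before assigning the node the static IP .2. A minimal Go sketch of that idea is below; it is illustrative only, not minikube's code, and the step of 9 between third octets plus the network name demo-net are assumptions read off the observed sequence.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// takenSubnets collects the IPAM subnets of every existing docker network
// so a new bridge does not collide with them (hypothetical helper).
func takenSubnets() (map[string]bool, error) {
	ids, err := exec.Command("docker", "network", "ls", "-q").Output()
	if err != nil {
		return nil, err
	}
	taken := map[string]bool{}
	for _, id := range strings.Fields(string(ids)) {
		out, err := exec.Command("docker", "network", "inspect", "-f",
			"{{range .IPAM.Config}}{{.Subnet}} {{end}}", id).Output()
		if err != nil {
			continue // the network may have vanished; ignore it
		}
		for _, cidr := range strings.Fields(string(out)) {
			taken[cidr] = true
		}
	}
	return taken, nil
}

func main() {
	taken, err := takenSubnets()
	if err != nil {
		panic(err)
	}
	// The log walks 192.168.49/58/67/76/85.0/24, i.e. the third octet steps by 9.
	for octet := 49; octet < 255; octet += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
		if taken[cidr] {
			fmt.Println("skipping taken subnet", cidr)
			continue
		}
		gateway := fmt.Sprintf("192.168.%d.1", octet)
		out, err := exec.Command("docker", "network", "create", "--driver=bridge",
			"--subnet="+cidr, "--gateway="+gateway, "demo-net").CombinedOutput()
		if err != nil {
			panic(fmt.Errorf("network create failed: %v: %s", err, out))
		}
		fmt.Println("created demo-net on", cidr)
		return
	}
	panic("no free 192.168.x.0/24 subnet found")
}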
	I1013 22:10:00.852628  204091 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1013 22:10:00.869480  204091 cli_runner.go:164] Run: docker volume create newest-cni-400889 --label name.minikube.sigs.k8s.io=newest-cni-400889 --label created_by.minikube.sigs.k8s.io=true
	I1013 22:10:00.890249  204091 oci.go:103] Successfully created a docker volume newest-cni-400889
	I1013 22:10:00.890352  204091 cli_runner.go:164] Run: docker run --rm --name newest-cni-400889-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-400889 --entrypoint /usr/bin/test -v newest-cni-400889:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1013 22:10:01.456519  204091 oci.go:107] Successfully prepared a docker volume newest-cni-400889
	I1013 22:10:01.456592  204091 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:10:01.456655  204091 kic.go:194] Starting extracting preloaded images to volume ...
	I1013 22:10:01.456745  204091 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-2495/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-400889:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	W1013 22:10:05.095953  199649 node_ready.go:57] node "default-k8s-diff-port-007533" has "Ready":"False" status (will retry)
	W1013 22:10:07.594208  199649 node_ready.go:57] node "default-k8s-diff-port-007533" has "Ready":"False" status (will retry)
	I1013 22:10:05.916860  204091 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-2495/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-400889:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.460063699s)
	I1013 22:10:05.916901  204091 kic.go:203] duration metric: took 4.460243296s to extract preloaded images to volume ...
	W1013 22:10:05.917038  204091 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1013 22:10:05.917156  204091 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1013 22:10:05.977342  204091 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-400889 --name newest-cni-400889 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-400889 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-400889 --network newest-cni-400889 --ip 192.168.85.2 --volume newest-cni-400889:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1013 22:10:06.285949  204091 cli_runner.go:164] Run: docker container inspect newest-cni-400889 --format={{.State.Running}}
	I1013 22:10:06.309737  204091 cli_runner.go:164] Run: docker container inspect newest-cni-400889 --format={{.State.Status}}
	I1013 22:10:06.335916  204091 cli_runner.go:164] Run: docker exec newest-cni-400889 stat /var/lib/dpkg/alternatives/iptables
	I1013 22:10:06.406554  204091 oci.go:144] the created container "newest-cni-400889" has a running status.
	I1013 22:10:06.406598  204091 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21724-2495/.minikube/machines/newest-cni-400889/id_rsa...
	I1013 22:10:06.716496  204091 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21724-2495/.minikube/machines/newest-cni-400889/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1013 22:10:06.749257  204091 cli_runner.go:164] Run: docker container inspect newest-cni-400889 --format={{.State.Status}}
	I1013 22:10:06.775300  204091 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1013 22:10:06.775332  204091 kic_runner.go:114] Args: [docker exec --privileged newest-cni-400889 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1013 22:10:06.861825  204091 cli_runner.go:164] Run: docker container inspect newest-cni-400889 --format={{.State.Status}}
	I1013 22:10:06.887503  204091 machine.go:93] provisionDockerMachine start ...
	I1013 22:10:06.887593  204091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-400889
	I1013 22:10:06.916071  204091 main.go:141] libmachine: Using SSH client type: native
	I1013 22:10:06.916496  204091 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33086 <nil> <nil>}
	I1013 22:10:06.916530  204091 main.go:141] libmachine: About to run SSH command:
	hostname
	I1013 22:10:06.917268  204091 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1013 22:10:10.075995  204091 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-400889
	
	I1013 22:10:10.076023  204091 ubuntu.go:182] provisioning hostname "newest-cni-400889"
	I1013 22:10:10.076089  204091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-400889
	I1013 22:10:10.095298  204091 main.go:141] libmachine: Using SSH client type: native
	I1013 22:10:10.095630  204091 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33086 <nil> <nil>}
	I1013 22:10:10.095648  204091 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-400889 && echo "newest-cni-400889" | sudo tee /etc/hostname
	I1013 22:10:10.255418  204091 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-400889
	
	I1013 22:10:10.255539  204091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-400889
	I1013 22:10:10.273796  204091 main.go:141] libmachine: Using SSH client type: native
	I1013 22:10:10.274111  204091 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33086 <nil> <nil>}
	I1013 22:10:10.274134  204091 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-400889' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-400889/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-400889' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 22:10:10.420331  204091 main.go:141] libmachine: SSH cmd err, output: <nil>: 
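provisionDockerMachine above drives everything over SSH to 127.0.0.1:33086, the host port docker published for the container's 22/tcp, logging in as the docker user with the generated id_rsa key and running the hostname and /etc/hosts commands shown. A rough, self-contained sketch of that pattern with golang.org/x/crypto/ssh follows; the port, user and key path are the example values visible in this run, and the sketch is not minikube's actual libmachine client.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path taken from the log above; treat it as an example value.
	keyPath := "/home/jenkins/minikube-integration/21724-2495/.minikube/machines/newest-cni-400889/id_rsa"
	key, err := os.ReadFile(keyPath)
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway local container
	}
	// 33086 is the host port docker mapped to the container's 22/tcp in this run.
	client, err := ssh.Dial("tcp", "127.0.0.1:33086", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()

	out, err := sess.CombinedOutput("sudo hostname newest-cni-400889 && hostname")
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}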
	I1013 22:10:10.420356  204091 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-2495/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-2495/.minikube}
	I1013 22:10:10.420375  204091 ubuntu.go:190] setting up certificates
	I1013 22:10:10.420427  204091 provision.go:84] configureAuth start
	I1013 22:10:10.420503  204091 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-400889
	I1013 22:10:10.440228  204091 provision.go:143] copyHostCerts
	I1013 22:10:10.440311  204091 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-2495/.minikube/key.pem, removing ...
	I1013 22:10:10.440331  204091 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-2495/.minikube/key.pem
	I1013 22:10:10.440495  204091 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-2495/.minikube/key.pem (1675 bytes)
	I1013 22:10:10.440670  204091 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-2495/.minikube/ca.pem, removing ...
	I1013 22:10:10.440688  204091 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-2495/.minikube/ca.pem
	I1013 22:10:10.440727  204091 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-2495/.minikube/ca.pem (1082 bytes)
	I1013 22:10:10.440807  204091 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-2495/.minikube/cert.pem, removing ...
	I1013 22:10:10.440822  204091 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-2495/.minikube/cert.pem
	I1013 22:10:10.440858  204091 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-2495/.minikube/cert.pem (1123 bytes)
	I1013 22:10:10.440920  204091 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-2495/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca-key.pem org=jenkins.newest-cni-400889 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-400889]
	I1013 22:10:11.061433  204091 provision.go:177] copyRemoteCerts
	I1013 22:10:11.061500  204091 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 22:10:11.061547  204091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-400889
	I1013 22:10:11.080534  204091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33086 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/newest-cni-400889/id_rsa Username:docker}
	I1013 22:10:11.187588  204091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1013 22:10:11.207134  204091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1013 22:10:11.225788  204091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1013 22:10:11.245433  204091 provision.go:87] duration metric: took 824.975589ms to configureAuth
	I1013 22:10:11.245458  204091 ubuntu.go:206] setting minikube options for container-runtime
	I1013 22:10:11.245657  204091 config.go:182] Loaded profile config "newest-cni-400889": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:10:11.245762  204091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-400889
	I1013 22:10:11.263193  204091 main.go:141] libmachine: Using SSH client type: native
	I1013 22:10:11.263502  204091 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33086 <nil> <nil>}
	I1013 22:10:11.263516  204091 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1013 22:10:11.524286  204091 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1013 22:10:11.524313  204091 machine.go:96] duration metric: took 4.636789761s to provisionDockerMachine
	I1013 22:10:11.524323  204091 client.go:171] duration metric: took 10.790826534s to LocalClient.Create
	I1013 22:10:11.524336  204091 start.go:167] duration metric: took 10.790906262s to libmachine.API.Create "newest-cni-400889"
	I1013 22:10:11.524344  204091 start.go:293] postStartSetup for "newest-cni-400889" (driver="docker")
	I1013 22:10:11.524353  204091 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 22:10:11.524425  204091 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 22:10:11.524487  204091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-400889
	I1013 22:10:11.545539  204091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33086 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/newest-cni-400889/id_rsa Username:docker}
	I1013 22:10:11.647997  204091 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 22:10:11.651399  204091 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1013 22:10:11.651431  204091 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1013 22:10:11.651442  204091 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-2495/.minikube/addons for local assets ...
	I1013 22:10:11.651497  204091 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-2495/.minikube/files for local assets ...
	I1013 22:10:11.651596  204091 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem -> 42992.pem in /etc/ssl/certs
	I1013 22:10:11.651708  204091 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1013 22:10:11.659508  204091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem --> /etc/ssl/certs/42992.pem (1708 bytes)
	I1013 22:10:11.677502  204091 start.go:296] duration metric: took 153.144333ms for postStartSetup
	I1013 22:10:11.677899  204091 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-400889
	I1013 22:10:11.694488  204091 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/newest-cni-400889/config.json ...
	I1013 22:10:11.694795  204091 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 22:10:11.694844  204091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-400889
	I1013 22:10:11.712165  204091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33086 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/newest-cni-400889/id_rsa Username:docker}
	I1013 22:10:11.816707  204091 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1013 22:10:11.821621  204091 start.go:128] duration metric: took 11.092028421s to createHost
	I1013 22:10:11.821642  204091 start.go:83] releasing machines lock for "newest-cni-400889", held for 11.092156007s
	I1013 22:10:11.821707  204091 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-400889
	I1013 22:10:11.838654  204091 ssh_runner.go:195] Run: cat /version.json
	I1013 22:10:11.838721  204091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-400889
	I1013 22:10:11.838971  204091 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 22:10:11.839035  204091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-400889
	I1013 22:10:11.871593  204091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33086 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/newest-cni-400889/id_rsa Username:docker}
	I1013 22:10:11.881371  204091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33086 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/newest-cni-400889/id_rsa Username:docker}
	I1013 22:10:11.975493  204091 ssh_runner.go:195] Run: systemctl --version
	I1013 22:10:12.073510  204091 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1013 22:10:12.124864  204091 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 22:10:12.129297  204091 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 22:10:12.129364  204091 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 22:10:12.165974  204091 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1013 22:10:12.165994  204091 start.go:495] detecting cgroup driver to use...
	I1013 22:10:12.166026  204091 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1013 22:10:12.166100  204091 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1013 22:10:12.185037  204091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1013 22:10:12.199961  204091 docker.go:218] disabling cri-docker service (if available) ...
	I1013 22:10:12.200079  204091 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 22:10:12.218559  204091 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 22:10:12.237516  204091 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 22:10:12.364415  204091 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 22:10:12.482792  204091 docker.go:234] disabling docker service ...
	I1013 22:10:12.482886  204091 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 22:10:12.504826  204091 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 22:10:12.520475  204091 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 22:10:12.660393  204091 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 22:10:12.784225  204091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1013 22:10:12.797294  204091 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 22:10:12.813993  204091 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1013 22:10:12.814105  204091 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:10:12.823579  204091 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1013 22:10:12.823697  204091 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:10:12.832685  204091 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:10:12.841870  204091 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:10:12.854931  204091 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 22:10:12.864132  204091 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:10:12.873908  204091 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:10:12.887412  204091 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:10:12.896891  204091 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 22:10:12.904587  204091 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1013 22:10:12.911977  204091 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:10:13.032106  204091 ssh_runner.go:195] Run: sudo systemctl restart crio
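The sequence just above is a series of in-place sed edits to /etc/crio/crio.conf.d/02-crio.conf: the pause image is pinned to registry.k8s.io/pause:3.10.1, cgroup_manager is forced to cgroupfs, conmon_cgroup is re-added as "pod", and net.ipv4.ip_unprivileged_port_start=0 is injected into default_sysctls, after which crio is restarted. The Go sketch below applies the same substitutions to an in-memory sample of such a drop-in (the sample content is hypothetical; stdlib regexp only), just to make the intent of the sed expressions readable.

package main

import (
	"fmt"
	"regexp"
)

// Hypothetical sample shaped like /etc/crio/crio.conf.d/02-crio.conf before the edits.
const dropIn = `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
default_sysctls = [
]
`

func main() {
	conf := dropIn
	// Pin the pause image, as in the first sed above.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	// Force the cgroupfs cgroup manager.
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// The log deletes conmon_cgroup and re-adds it after cgroup_manager;
	// replacing it in place gives the same end state for this sample.
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*$`).
		ReplaceAllString(conf, `conmon_cgroup = "pod"`)
	// Allow pods to bind low ports without privileges.
	conf = regexp.MustCompile(`(?m)^default_sysctls *= *\[`).
		ReplaceAllString(conf, "default_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",")
	fmt.Print(conf)
	// A real node would write this back and run: systemctl daemon-reload && systemctl restart crio
}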
	I1013 22:10:13.171099  204091 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1013 22:10:13.171172  204091 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1013 22:10:13.175201  204091 start.go:563] Will wait 60s for crictl version
	I1013 22:10:13.175265  204091 ssh_runner.go:195] Run: which crictl
	I1013 22:10:13.178924  204091 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1013 22:10:13.203885  204091 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1013 22:10:13.203969  204091 ssh_runner.go:195] Run: crio --version
	I1013 22:10:13.232233  204091 ssh_runner.go:195] Run: crio --version
	I1013 22:10:13.266856  204091 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1013 22:10:13.269668  204091 cli_runner.go:164] Run: docker network inspect newest-cni-400889 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 22:10:13.285346  204091 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1013 22:10:13.289019  204091 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 22:10:13.301034  204091 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1013 22:10:09.594370  199649 node_ready.go:57] node "default-k8s-diff-port-007533" has "Ready":"False" status (will retry)
	W1013 22:10:12.093214  199649 node_ready.go:57] node "default-k8s-diff-port-007533" has "Ready":"False" status (will retry)
	I1013 22:10:13.303883  204091 kubeadm.go:883] updating cluster {Name:newest-cni-400889 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-400889 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disab
leMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 22:10:13.304027  204091 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:10:13.304107  204091 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 22:10:13.344245  204091 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 22:10:13.344270  204091 crio.go:433] Images already preloaded, skipping extraction
	I1013 22:10:13.344331  204091 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 22:10:13.369734  204091 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 22:10:13.369759  204091 cache_images.go:85] Images are preloaded, skipping loading
	I1013 22:10:13.369767  204091 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1013 22:10:13.369897  204091 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-400889 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-400889 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1013 22:10:13.369994  204091 ssh_runner.go:195] Run: crio config
	I1013 22:10:13.429277  204091 cni.go:84] Creating CNI manager for ""
	I1013 22:10:13.429303  204091 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 22:10:13.429317  204091 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1013 22:10:13.429341  204091 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-400889 NodeName:newest-cni-400889 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 22:10:13.429470  204091 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-400889"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1013 22:10:13.429543  204091 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1013 22:10:13.437764  204091 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 22:10:13.437886  204091 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 22:10:13.446359  204091 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1013 22:10:13.459385  204091 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 22:10:13.473876  204091 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1013 22:10:13.487277  204091 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1013 22:10:13.490757  204091 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 22:10:13.500625  204091 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:10:13.617143  204091 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 22:10:13.633860  204091 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/newest-cni-400889 for IP: 192.168.85.2
	I1013 22:10:13.633895  204091 certs.go:195] generating shared ca certs ...
	I1013 22:10:13.633911  204091 certs.go:227] acquiring lock for ca certs: {Name:mk2386d3847709c1fe7ff4ab092e2e3fd8551167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:10:13.634078  204091 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-2495/.minikube/ca.key
	I1013 22:10:13.634143  204091 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.key
	I1013 22:10:13.634156  204091 certs.go:257] generating profile certs ...
	I1013 22:10:13.634231  204091 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/newest-cni-400889/client.key
	I1013 22:10:13.635081  204091 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/newest-cni-400889/client.crt with IP's: []
	I1013 22:10:14.490604  204091 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/newest-cni-400889/client.crt ...
	I1013 22:10:14.490643  204091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/newest-cni-400889/client.crt: {Name:mkf6565e43552edf412ea6ea3109c96d0aa4ca13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:10:14.490833  204091 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/newest-cni-400889/client.key ...
	I1013 22:10:14.490850  204091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/newest-cni-400889/client.key: {Name:mk2153306a80c6b7a2366c6828c9b73ef42b023c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:10:14.490928  204091 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/newest-cni-400889/apiserver.key.58b80bf4
	I1013 22:10:14.490949  204091 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/newest-cni-400889/apiserver.crt.58b80bf4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1013 22:10:14.801968  204091 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/newest-cni-400889/apiserver.crt.58b80bf4 ...
	I1013 22:10:14.801997  204091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/newest-cni-400889/apiserver.crt.58b80bf4: {Name:mka1d1df894ff9c02f408a5e94952ede7bd4010b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:10:14.802178  204091 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/newest-cni-400889/apiserver.key.58b80bf4 ...
	I1013 22:10:14.802194  204091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/newest-cni-400889/apiserver.key.58b80bf4: {Name:mk7b0b54a26c406e8fcba65457db9ade23c57492 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:10:14.802288  204091 certs.go:382] copying /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/newest-cni-400889/apiserver.crt.58b80bf4 -> /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/newest-cni-400889/apiserver.crt
	I1013 22:10:14.802369  204091 certs.go:386] copying /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/newest-cni-400889/apiserver.key.58b80bf4 -> /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/newest-cni-400889/apiserver.key
	I1013 22:10:14.802431  204091 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/newest-cni-400889/proxy-client.key
	I1013 22:10:14.802452  204091 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/newest-cni-400889/proxy-client.crt with IP's: []
	I1013 22:10:15.206667  204091 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/newest-cni-400889/proxy-client.crt ...
	I1013 22:10:15.206695  204091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/newest-cni-400889/proxy-client.crt: {Name:mk8acecebd4a120699d4dda0449e92296a18736b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:10:15.206895  204091 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/newest-cni-400889/proxy-client.key ...
	I1013 22:10:15.206913  204091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/newest-cni-400889/proxy-client.key: {Name:mke2d5fb98c5f0ae744d22d1e1540598f99bd4f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
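The certs.go lines above generate the per-profile certificates: a client cert, an apiserver cert whose IP SANs cover 10.96.0.1 (the first service IP), 127.0.0.1, 10.0.0.1 and the node IP 192.168.85.2, and an aggregator proxy-client cert, all signed by the shared minikubeCA that was reused from an earlier run. A compact crypto/x509 sketch of issuing a server certificate with those IP SANs from a CA is shown below; it is illustrative only, uses an in-memory throwaway CA and ECDSA keys for brevity (minikube's on-disk files are RSA), and elides error handling.

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA; in the log this role is played by the shared minikubeCA key pair.
	caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate carrying the IP SANs seen for the apiserver cert above.
	leafKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
		},
	}
	leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
}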
	I1013 22:10:15.207125  204091 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/4299.pem (1338 bytes)
	W1013 22:10:15.207172  204091 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-2495/.minikube/certs/4299_empty.pem, impossibly tiny 0 bytes
	I1013 22:10:15.207187  204091 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca-key.pem (1679 bytes)
	I1013 22:10:15.207211  204091 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem (1082 bytes)
	I1013 22:10:15.207238  204091 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem (1123 bytes)
	I1013 22:10:15.207264  204091 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/key.pem (1675 bytes)
	I1013 22:10:15.207313  204091 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem (1708 bytes)
	I1013 22:10:15.207891  204091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 22:10:15.226226  204091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1013 22:10:15.244260  204091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 22:10:15.261702  204091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1013 22:10:15.279294  204091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/newest-cni-400889/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1013 22:10:15.298584  204091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/newest-cni-400889/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1013 22:10:15.317658  204091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/newest-cni-400889/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 22:10:15.340079  204091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/newest-cni-400889/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1013 22:10:15.358089  204091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem --> /usr/share/ca-certificates/42992.pem (1708 bytes)
	I1013 22:10:15.376971  204091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 22:10:15.395972  204091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/certs/4299.pem --> /usr/share/ca-certificates/4299.pem (1338 bytes)
	I1013 22:10:15.414043  204091 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 22:10:15.427013  204091 ssh_runner.go:195] Run: openssl version
	I1013 22:10:15.433312  204091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/42992.pem && ln -fs /usr/share/ca-certificates/42992.pem /etc/ssl/certs/42992.pem"
	I1013 22:10:15.442079  204091 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42992.pem
	I1013 22:10:15.445841  204091 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 13 21:06 /usr/share/ca-certificates/42992.pem
	I1013 22:10:15.445949  204091 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42992.pem
	I1013 22:10:15.488637  204091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/42992.pem /etc/ssl/certs/3ec20f2e.0"
	I1013 22:10:15.497110  204091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 22:10:15.505620  204091 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:10:15.509572  204091 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 20:59 /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:10:15.509678  204091 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:10:15.551108  204091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1013 22:10:15.559681  204091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4299.pem && ln -fs /usr/share/ca-certificates/4299.pem /etc/ssl/certs/4299.pem"
	I1013 22:10:15.568379  204091 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4299.pem
	I1013 22:10:15.572142  204091 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 13 21:06 /usr/share/ca-certificates/4299.pem
	I1013 22:10:15.572207  204091 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4299.pem
	I1013 22:10:15.613942  204091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4299.pem /etc/ssl/certs/51391683.0"
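The run above wires each CA into the node's trust store: the PEM is hashed with openssl and then symlinked as /etc/ssl/certs/<hash>.0, which is how OpenSSL locates trusted CAs. A minimal by-hand sketch of the same step, using paths from this log (the hash value depends on the certificate contents):

    # illustrative only: link the minikube CA into the OpenSSL trust store
    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")   # prints e.g. b5213941
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"  # OpenSSL resolves CAs via <hash>.0
    openssl verify -CApath /etc/ssl/certs "$CERT"   # should end with "OK"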
	I1013 22:10:15.622324  204091 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 22:10:15.626056  204091 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1013 22:10:15.626124  204091 kubeadm.go:400] StartCluster: {Name:newest-cni-400889 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-400889 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:10:15.626207  204091 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 22:10:15.626277  204091 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 22:10:15.658266  204091 cri.go:89] found id: ""
	I1013 22:10:15.658334  204091 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1013 22:10:15.666092  204091 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1013 22:10:15.673870  204091 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1013 22:10:15.673934  204091 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1013 22:10:15.681973  204091 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1013 22:10:15.681995  204091 kubeadm.go:157] found existing configuration files:
	
	I1013 22:10:15.682055  204091 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1013 22:10:15.690198  204091 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1013 22:10:15.690277  204091 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1013 22:10:15.697853  204091 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1013 22:10:15.706252  204091 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1013 22:10:15.706315  204091 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1013 22:10:15.713607  204091 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1013 22:10:15.721545  204091 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1013 22:10:15.721610  204091 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1013 22:10:15.728937  204091 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1013 22:10:15.736815  204091 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1013 22:10:15.736901  204091 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
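The four grep/rm pairs above are minikube's stale-config cleanup: any /etc/kubernetes/*.conf that does not reference https://control-plane.minikube.internal:8443 is removed before kubeadm init runs. Condensed into one loop (illustrative; same commands as the log):

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q https://control-plane.minikube.internal:8443 "/etc/kubernetes/${f}.conf" \
        || sudo rm -f "/etc/kubernetes/${f}.conf"
    done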
	I1013 22:10:15.744505  204091 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1013 22:10:15.788211  204091 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1013 22:10:15.788538  204091 kubeadm.go:318] [preflight] Running pre-flight checks
	I1013 22:10:15.819766  204091 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1013 22:10:15.819896  204091 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1013 22:10:15.819940  204091 kubeadm.go:318] OS: Linux
	I1013 22:10:15.819992  204091 kubeadm.go:318] CGROUPS_CPU: enabled
	I1013 22:10:15.820046  204091 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1013 22:10:15.820099  204091 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1013 22:10:15.820156  204091 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1013 22:10:15.820210  204091 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1013 22:10:15.820264  204091 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1013 22:10:15.820314  204091 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1013 22:10:15.820368  204091 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1013 22:10:15.820432  204091 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1013 22:10:15.913822  204091 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1013 22:10:15.913965  204091 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1013 22:10:15.914078  204091 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1013 22:10:15.928259  204091 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1013 22:10:14.096476  199649 node_ready.go:57] node "default-k8s-diff-port-007533" has "Ready":"False" status (will retry)
	W1013 22:10:16.593738  199649 node_ready.go:57] node "default-k8s-diff-port-007533" has "Ready":"False" status (will retry)
	I1013 22:10:15.931716  204091 out.go:252]   - Generating certificates and keys ...
	I1013 22:10:15.931834  204091 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1013 22:10:15.931909  204091 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1013 22:10:17.023507  204091 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1013 22:10:17.314418  204091 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1013 22:10:18.621107  204091 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1013 22:10:18.853521  204091 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	W1013 22:10:18.594353  199649 node_ready.go:57] node "default-k8s-diff-port-007533" has "Ready":"False" status (will retry)
	W1013 22:10:21.094148  199649 node_ready.go:57] node "default-k8s-diff-port-007533" has "Ready":"False" status (will retry)
	I1013 22:10:23.097090  199649 node_ready.go:49] node "default-k8s-diff-port-007533" is "Ready"
	I1013 22:10:23.097118  199649 node_ready.go:38] duration metric: took 40.506685569s for node "default-k8s-diff-port-007533" to be "Ready" ...
	I1013 22:10:23.097132  199649 api_server.go:52] waiting for apiserver process to appear ...
	I1013 22:10:23.097190  199649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 22:10:23.109710  199649 api_server.go:72] duration metric: took 41.511387878s to wait for apiserver process to appear ...
	I1013 22:10:23.109732  199649 api_server.go:88] waiting for apiserver healthz status ...
	I1013 22:10:23.109750  199649 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1013 22:10:23.118585  199649 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1013 22:10:23.119757  199649 api_server.go:141] control plane version: v1.34.1
	I1013 22:10:23.119851  199649 api_server.go:131] duration metric: took 10.03817ms to wait for apiserver health ...
	I1013 22:10:23.119862  199649 system_pods.go:43] waiting for kube-system pods to appear ...
	I1013 22:10:23.124402  199649 system_pods.go:59] 8 kube-system pods found
	I1013 22:10:23.124434  199649 system_pods.go:61] "coredns-66bc5c9577-vftdh" [8452dcd0-0fc3-4e41-8397-cafb1d9a184a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 22:10:23.124442  199649 system_pods.go:61] "etcd-default-k8s-diff-port-007533" [5e906fd0-4bfb-4e7c-a05c-a490a92bc11f] Running
	I1013 22:10:23.124448  199649 system_pods.go:61] "kindnet-xvkwh" [ab2dd725-7a0d-4506-83a0-757e7277facc] Running
	I1013 22:10:23.124453  199649 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-007533" [e022c2e1-54c5-444b-a1fd-06f542fc4b82] Running
	I1013 22:10:23.124466  199649 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-007533" [872726a7-7066-467d-a227-3a381c0a40a3] Running
	I1013 22:10:23.124471  199649 system_pods.go:61] "kube-proxy-5947n" [bd11df11-2e73-4ec6-a88a-4ac2faa19031] Running
	I1013 22:10:23.124476  199649 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-007533" [b816e0be-db33-44df-b7d6-366c823e1c25] Running
	I1013 22:10:23.124483  199649 system_pods.go:61] "storage-provisioner" [6082c077-fe34-4dcc-97c9-274f87bdef2a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 22:10:23.124488  199649 system_pods.go:74] duration metric: took 4.620879ms to wait for pod list to return data ...
	I1013 22:10:23.124497  199649 default_sa.go:34] waiting for default service account to be created ...
	I1013 22:10:23.130428  199649 default_sa.go:45] found service account: "default"
	I1013 22:10:23.130495  199649 default_sa.go:55] duration metric: took 5.990954ms for default service account to be created ...
	I1013 22:10:23.130517  199649 system_pods.go:116] waiting for k8s-apps to be running ...
	I1013 22:10:23.133978  199649 system_pods.go:86] 8 kube-system pods found
	I1013 22:10:23.134057  199649 system_pods.go:89] "coredns-66bc5c9577-vftdh" [8452dcd0-0fc3-4e41-8397-cafb1d9a184a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 22:10:23.134095  199649 system_pods.go:89] "etcd-default-k8s-diff-port-007533" [5e906fd0-4bfb-4e7c-a05c-a490a92bc11f] Running
	I1013 22:10:23.134123  199649 system_pods.go:89] "kindnet-xvkwh" [ab2dd725-7a0d-4506-83a0-757e7277facc] Running
	I1013 22:10:23.134140  199649 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-007533" [e022c2e1-54c5-444b-a1fd-06f542fc4b82] Running
	I1013 22:10:23.134172  199649 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-007533" [872726a7-7066-467d-a227-3a381c0a40a3] Running
	I1013 22:10:23.134194  199649 system_pods.go:89] "kube-proxy-5947n" [bd11df11-2e73-4ec6-a88a-4ac2faa19031] Running
	I1013 22:10:23.134213  199649 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-007533" [b816e0be-db33-44df-b7d6-366c823e1c25] Running
	I1013 22:10:23.134248  199649 system_pods.go:89] "storage-provisioner" [6082c077-fe34-4dcc-97c9-274f87bdef2a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 22:10:23.134288  199649 retry.go:31] will retry after 286.62348ms: missing components: kube-dns
	I1013 22:10:23.429446  199649 system_pods.go:86] 8 kube-system pods found
	I1013 22:10:23.429532  199649 system_pods.go:89] "coredns-66bc5c9577-vftdh" [8452dcd0-0fc3-4e41-8397-cafb1d9a184a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1013 22:10:23.429554  199649 system_pods.go:89] "etcd-default-k8s-diff-port-007533" [5e906fd0-4bfb-4e7c-a05c-a490a92bc11f] Running
	I1013 22:10:23.429590  199649 system_pods.go:89] "kindnet-xvkwh" [ab2dd725-7a0d-4506-83a0-757e7277facc] Running
	I1013 22:10:23.429613  199649 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-007533" [e022c2e1-54c5-444b-a1fd-06f542fc4b82] Running
	I1013 22:10:23.429643  199649 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-007533" [872726a7-7066-467d-a227-3a381c0a40a3] Running
	I1013 22:10:23.429674  199649 system_pods.go:89] "kube-proxy-5947n" [bd11df11-2e73-4ec6-a88a-4ac2faa19031] Running
	I1013 22:10:23.429697  199649 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-007533" [b816e0be-db33-44df-b7d6-366c823e1c25] Running
	I1013 22:10:23.429718  199649 system_pods.go:89] "storage-provisioner" [6082c077-fe34-4dcc-97c9-274f87bdef2a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1013 22:10:23.429763  199649 retry.go:31] will retry after 333.870213ms: missing components: kube-dns
	I1013 22:10:20.551352  204091 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1013 22:10:20.551722  204091 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-400889] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1013 22:10:21.735262  204091 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1013 22:10:21.735615  204091 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-400889] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1013 22:10:22.233744  204091 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1013 22:10:23.338248  204091 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1013 22:10:23.896344  204091 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1013 22:10:23.896592  204091 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1013 22:10:24.359449  204091 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1013 22:10:24.398217  204091 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1013 22:10:25.028569  204091 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1013 22:10:23.768817  199649 system_pods.go:86] 8 kube-system pods found
	I1013 22:10:23.768900  199649 system_pods.go:89] "coredns-66bc5c9577-vftdh" [8452dcd0-0fc3-4e41-8397-cafb1d9a184a] Running
	I1013 22:10:23.768936  199649 system_pods.go:89] "etcd-default-k8s-diff-port-007533" [5e906fd0-4bfb-4e7c-a05c-a490a92bc11f] Running
	I1013 22:10:23.768962  199649 system_pods.go:89] "kindnet-xvkwh" [ab2dd725-7a0d-4506-83a0-757e7277facc] Running
	I1013 22:10:23.768981  199649 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-007533" [e022c2e1-54c5-444b-a1fd-06f542fc4b82] Running
	I1013 22:10:23.769014  199649 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-007533" [872726a7-7066-467d-a227-3a381c0a40a3] Running
	I1013 22:10:23.769034  199649 system_pods.go:89] "kube-proxy-5947n" [bd11df11-2e73-4ec6-a88a-4ac2faa19031] Running
	I1013 22:10:23.769052  199649 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-007533" [b816e0be-db33-44df-b7d6-366c823e1c25] Running
	I1013 22:10:23.769069  199649 system_pods.go:89] "storage-provisioner" [6082c077-fe34-4dcc-97c9-274f87bdef2a] Running
	I1013 22:10:23.769104  199649 system_pods.go:126] duration metric: took 638.567391ms to wait for k8s-apps to be running ...
	I1013 22:10:23.769129  199649 system_svc.go:44] waiting for kubelet service to be running ....
	I1013 22:10:23.769217  199649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 22:10:23.802589  199649 system_svc.go:56] duration metric: took 33.438511ms WaitForService to wait for kubelet
	I1013 22:10:23.802665  199649 kubeadm.go:586] duration metric: took 42.204346882s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 22:10:23.802700  199649 node_conditions.go:102] verifying NodePressure condition ...
	I1013 22:10:23.806266  199649 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1013 22:10:23.806359  199649 node_conditions.go:123] node cpu capacity is 2
	I1013 22:10:23.806385  199649 node_conditions.go:105] duration metric: took 3.667862ms to run NodePressure ...
	I1013 22:10:23.806424  199649 start.go:241] waiting for startup goroutines ...
	I1013 22:10:23.806447  199649 start.go:246] waiting for cluster config update ...
	I1013 22:10:23.806470  199649 start.go:255] writing updated cluster config ...
	I1013 22:10:23.806824  199649 ssh_runner.go:195] Run: rm -f paused
	I1013 22:10:23.811446  199649 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 22:10:23.815305  199649 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-vftdh" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:10:23.821940  199649 pod_ready.go:94] pod "coredns-66bc5c9577-vftdh" is "Ready"
	I1013 22:10:23.822016  199649 pod_ready.go:86] duration metric: took 6.641284ms for pod "coredns-66bc5c9577-vftdh" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:10:23.825093  199649 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-007533" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:10:23.830582  199649 pod_ready.go:94] pod "etcd-default-k8s-diff-port-007533" is "Ready"
	I1013 22:10:23.830657  199649 pod_ready.go:86] duration metric: took 5.495073ms for pod "etcd-default-k8s-diff-port-007533" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:10:23.833514  199649 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-007533" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:10:23.838933  199649 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-007533" is "Ready"
	I1013 22:10:23.839007  199649 pod_ready.go:86] duration metric: took 5.420202ms for pod "kube-apiserver-default-k8s-diff-port-007533" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:10:23.841830  199649 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-007533" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:10:24.216767  199649 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-007533" is "Ready"
	I1013 22:10:24.216796  199649 pod_ready.go:86] duration metric: took 374.89771ms for pod "kube-controller-manager-default-k8s-diff-port-007533" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:10:24.417297  199649 pod_ready.go:83] waiting for pod "kube-proxy-5947n" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:10:24.816666  199649 pod_ready.go:94] pod "kube-proxy-5947n" is "Ready"
	I1013 22:10:24.816711  199649 pod_ready.go:86] duration metric: took 399.379483ms for pod "kube-proxy-5947n" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:10:25.016800  199649 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-007533" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:10:25.416456  199649 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-007533" is "Ready"
	I1013 22:10:25.416533  199649 pod_ready.go:86] duration metric: took 399.698638ms for pod "kube-scheduler-default-k8s-diff-port-007533" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:10:25.416561  199649 pod_ready.go:40] duration metric: took 1.605047524s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 22:10:25.494414  199649 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1013 22:10:25.497611  199649 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-007533" cluster and "default" namespace by default
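The pod_ready loop above polls each control-plane pod until it reports Ready. Roughly the same check can be made from outside the node with kubectl, for example (a sketch, assuming the kubeconfig context written above is named default-k8s-diff-port-007533):

    kubectl --context default-k8s-diff-port-007533 -n kube-system \
      wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=240s
    kubectl --context default-k8s-diff-port-007533 -n kube-system get pods -o wide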
	I1013 22:10:26.075222  204091 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1013 22:10:26.976709  204091 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1013 22:10:26.977308  204091 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1013 22:10:26.979982  204091 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1013 22:10:26.982864  204091 out.go:252]   - Booting up control plane ...
	I1013 22:10:26.982971  204091 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1013 22:10:26.983054  204091 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1013 22:10:26.984686  204091 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1013 22:10:27.003293  204091 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1013 22:10:27.003654  204091 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1013 22:10:27.013101  204091 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1013 22:10:27.013456  204091 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1013 22:10:27.013811  204091 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1013 22:10:27.147076  204091 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1013 22:10:27.148469  204091 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1013 22:10:28.648302  204091 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.50077576s
	I1013 22:10:28.651940  204091 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1013 22:10:28.652038  204091 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1013 22:10:28.652361  204091 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1013 22:10:28.652462  204091 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1013 22:10:30.628743  204091 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.976361911s
	I1013 22:10:33.054130  204091 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.402178176s
	I1013 22:10:35.157229  204091 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.505080009s
	I1013 22:10:35.182349  204091 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1013 22:10:35.201023  204091 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1013 22:10:35.230488  204091 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1013 22:10:35.230978  204091 kubeadm.go:318] [mark-control-plane] Marking the node newest-cni-400889 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1013 22:10:35.245631  204091 kubeadm.go:318] [bootstrap-token] Using token: y615c7.k0l7bjjszu5e7gq4
	I1013 22:10:35.248715  204091 out.go:252]   - Configuring RBAC rules ...
	I1013 22:10:35.248844  204091 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1013 22:10:35.255531  204091 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1013 22:10:35.273244  204091 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1013 22:10:35.287099  204091 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1013 22:10:35.295147  204091 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1013 22:10:35.305130  204091 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1013 22:10:35.564983  204091 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1013 22:10:36.005468  204091 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1013 22:10:36.592541  204091 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1013 22:10:36.597114  204091 kubeadm.go:318] 
	I1013 22:10:36.597195  204091 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1013 22:10:36.597201  204091 kubeadm.go:318] 
	I1013 22:10:36.597282  204091 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1013 22:10:36.597287  204091 kubeadm.go:318] 
	I1013 22:10:36.597313  204091 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1013 22:10:36.597375  204091 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1013 22:10:36.597427  204091 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1013 22:10:36.597432  204091 kubeadm.go:318] 
	I1013 22:10:36.597488  204091 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1013 22:10:36.597493  204091 kubeadm.go:318] 
	I1013 22:10:36.597543  204091 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1013 22:10:36.597548  204091 kubeadm.go:318] 
	I1013 22:10:36.597603  204091 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1013 22:10:36.597683  204091 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1013 22:10:36.597755  204091 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1013 22:10:36.597759  204091 kubeadm.go:318] 
	I1013 22:10:36.597892  204091 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1013 22:10:36.597975  204091 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1013 22:10:36.597980  204091 kubeadm.go:318] 
	I1013 22:10:36.598068  204091 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token y615c7.k0l7bjjszu5e7gq4 \
	I1013 22:10:36.598176  204091 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:8246abc30e5ad4f73e0c5665e9f98b5f472397f3707b313971a0405cce9e4e60 \
	I1013 22:10:36.598200  204091 kubeadm.go:318] 	--control-plane 
	I1013 22:10:36.598204  204091 kubeadm.go:318] 
	I1013 22:10:36.598293  204091 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1013 22:10:36.598298  204091 kubeadm.go:318] 
	I1013 22:10:36.598383  204091 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token y615c7.k0l7bjjszu5e7gq4 \
	I1013 22:10:36.598490  204091 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:8246abc30e5ad4f73e0c5665e9f98b5f472397f3707b313971a0405cce9e4e60 
	I1013 22:10:36.608329  204091 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1013 22:10:36.608615  204091 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1013 22:10:36.608737  204091 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1013 22:10:36.608825  204091 cni.go:84] Creating CNI manager for ""
	I1013 22:10:36.608836  204091 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 22:10:36.612083  204091 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1013 22:10:36.615431  204091 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1013 22:10:36.630993  204091 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1013 22:10:36.631011  204091 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1013 22:10:36.657597  204091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
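Once the CNI manifest is applied above, kindnet should come up in kube-system and write its config into /etc/cni/net.d, which is what eventually clears the NetworkReady=false condition shown in the node description below. A quick check (illustrative; the DaemonSet name kindnet is inferred from the kindnet-* pod names in this log):

    sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get daemonset kindnet
    ls /etc/cni/net.d/   # populated once kindnet drops its CNI config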
	I1013 22:10:37.155308  204091 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1013 22:10:37.155527  204091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:10:37.155614  204091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-400889 minikube.k8s.io/updated_at=2025_10_13T22_10_37_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6d66ff63385795e7745a92b3d96cb54f5b977801 minikube.k8s.io/name=newest-cni-400889 minikube.k8s.io/primary=true
	I1013 22:10:37.170269  204091 ops.go:34] apiserver oom_adj: -16
	I1013 22:10:37.505725  204091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:10:38.006375  204091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:10:38.506691  204091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:10:39.007282  204091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:10:39.506518  204091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:10:40.008592  204091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:10:40.506371  204091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:10:41.006497  204091 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:10:41.115401  204091 kubeadm.go:1113] duration metric: took 3.959939287s to wait for elevateKubeSystemPrivileges
	I1013 22:10:41.115427  204091 kubeadm.go:402] duration metric: took 25.489308478s to StartCluster
	I1013 22:10:41.115444  204091 settings.go:142] acquiring lock: {Name:mk4a4b065845724eb9b4bb1832a39a02e57dd066 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:10:41.115503  204091 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-2495/kubeconfig
	I1013 22:10:41.116488  204091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/kubeconfig: {Name:mke211bdcf461494c5205a242d94f4d44bbce10e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:10:41.116696  204091 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 22:10:41.116836  204091 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1013 22:10:41.117072  204091 config.go:182] Loaded profile config "newest-cni-400889": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:10:41.117102  204091 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1013 22:10:41.117157  204091 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-400889"
	I1013 22:10:41.117170  204091 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-400889"
	I1013 22:10:41.117189  204091 host.go:66] Checking if "newest-cni-400889" exists ...
	I1013 22:10:41.118046  204091 cli_runner.go:164] Run: docker container inspect newest-cni-400889 --format={{.State.Status}}
	I1013 22:10:41.118095  204091 addons.go:69] Setting default-storageclass=true in profile "newest-cni-400889"
	I1013 22:10:41.118487  204091 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-400889"
	I1013 22:10:41.119417  204091 cli_runner.go:164] Run: docker container inspect newest-cni-400889 --format={{.State.Status}}
	I1013 22:10:41.119876  204091 out.go:179] * Verifying Kubernetes components...
	I1013 22:10:41.123666  204091 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:10:41.166126  204091 addons.go:238] Setting addon default-storageclass=true in "newest-cni-400889"
	I1013 22:10:41.166163  204091 host.go:66] Checking if "newest-cni-400889" exists ...
	I1013 22:10:41.166741  204091 cli_runner.go:164] Run: docker container inspect newest-cni-400889 --format={{.State.Status}}
	I1013 22:10:41.174492  204091 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1013 22:10:41.179402  204091 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 22:10:41.179425  204091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1013 22:10:41.179488  204091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-400889
	I1013 22:10:41.221296  204091 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1013 22:10:41.221318  204091 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1013 22:10:41.221378  204091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-400889
	I1013 22:10:41.221681  204091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33086 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/newest-cni-400889/id_rsa Username:docker}
	I1013 22:10:41.249195  204091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33086 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/newest-cni-400889/id_rsa Username:docker}
	I1013 22:10:41.454375  204091 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1013 22:10:41.454587  204091 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 22:10:41.553340  204091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 22:10:41.556600  204091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1013 22:10:41.854559  204091 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
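The replace command above pipes the coredns ConfigMap through sed to add a hosts block mapping host.minikube.internal to 192.168.85.1. The patched Corefile can be inspected with the same kubectl binary (a sketch; Corefile is the conventional data key in the coredns ConfigMap):

    sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'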
	I1013 22:10:41.856559  204091 api_server.go:52] waiting for apiserver process to appear ...
	I1013 22:10:41.856780  204091 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 22:10:42.142680  204091 api_server.go:72] duration metric: took 1.025952785s to wait for apiserver process to appear ...
	I1013 22:10:42.142714  204091 api_server.go:88] waiting for apiserver healthz status ...
	I1013 22:10:42.142738  204091 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1013 22:10:42.161764  204091 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1013 22:10:42.164249  204091 api_server.go:141] control plane version: v1.34.1
	I1013 22:10:42.164359  204091 api_server.go:131] duration metric: took 21.634945ms to wait for apiserver health ...
	I1013 22:10:42.164386  204091 system_pods.go:43] waiting for kube-system pods to appear ...
	I1013 22:10:42.170688  204091 system_pods.go:59] 8 kube-system pods found
	I1013 22:10:42.170799  204091 system_pods.go:61] "coredns-66bc5c9577-cc4wf" [0bf2694d-f251-4b5b-86fc-6dfc45fe88c4] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1013 22:10:42.170863  204091 system_pods.go:61] "etcd-newest-cni-400889" [67dc0b91-0ac5-4923-a944-5f2dd99ad833] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1013 22:10:42.170898  204091 system_pods.go:61] "kindnet-k8zlc" [bce90592-0127-4946-bc83-a6b06490dcc1] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1013 22:10:42.170921  204091 system_pods.go:61] "kube-apiserver-newest-cni-400889" [bd2c7b07-69bf-43b7-ba7a-1002daf22666] Running
	I1013 22:10:42.170961  204091 system_pods.go:61] "kube-controller-manager-newest-cni-400889" [0f7464a5-ac8f-49fb-92cb-42bedd0068ce] Running
	I1013 22:10:42.170983  204091 system_pods.go:61] "kube-proxy-2c8dd" [e0608056-bfa9-46cf-a6c4-da63c05dc51a] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1013 22:10:42.171022  204091 system_pods.go:61] "kube-scheduler-newest-cni-400889" [8d46c2a1-3b0a-4b30-8143-d2fa1d20f276] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1013 22:10:42.171060  204091 system_pods.go:61] "storage-provisioner" [d60a2a57-2585-4721-aab0-cd73fa7bf7f0] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1013 22:10:42.171096  204091 system_pods.go:74] duration metric: took 6.647036ms to wait for pod list to return data ...
	I1013 22:10:42.171130  204091 default_sa.go:34] waiting for default service account to be created ...
	I1013 22:10:42.179127  204091 default_sa.go:45] found service account: "default"
	I1013 22:10:42.179219  204091 default_sa.go:55] duration metric: took 8.066339ms for default service account to be created ...
	I1013 22:10:42.179255  204091 kubeadm.go:586] duration metric: took 1.062531688s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1013 22:10:42.179306  204091 node_conditions.go:102] verifying NodePressure condition ...
	I1013 22:10:42.185400  204091 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1013 22:10:42.185435  204091 node_conditions.go:123] node cpu capacity is 2
	I1013 22:10:42.185452  204091 node_conditions.go:105] duration metric: took 6.126244ms to run NodePressure ...
	I1013 22:10:42.185470  204091 start.go:241] waiting for startup goroutines ...
	I1013 22:10:42.186898  204091 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1013 22:10:42.190091  204091 addons.go:514] duration metric: took 1.072960937s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1013 22:10:42.359762  204091 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-400889" context rescaled to 1 replicas
	I1013 22:10:42.359821  204091 start.go:246] waiting for cluster config update ...
	I1013 22:10:42.359855  204091 start.go:255] writing updated cluster config ...
	I1013 22:10:42.360191  204091 ssh_runner.go:195] Run: rm -f paused
	I1013 22:10:42.429266  204091 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1013 22:10:42.432630  204091 out.go:179] * Done! kubectl is now configured to use "newest-cni-400889" cluster and "default" namespace by default
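From here the report switches from the minikube start trace to post-run diagnostics gathered from the newest-cni-400889 node (CRI-O journal, container status, node description). The apiserver health probe logged at 22:10:42 above can also be reproduced directly, for example (illustrative; /healthz and /livez are anonymously reachable on a default apiserver configuration):

    curl -sk https://192.168.85.2:8443/healthz           # expect: ok
    curl -sk "https://192.168.85.2:8443/livez?verbose"   # per-check breakdown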
	
	
	==> CRI-O <==
	Oct 13 22:10:42 newest-cni-400889 crio[837]: time="2025-10-13T22:10:42.068561656Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:10:42 newest-cni-400889 crio[837]: time="2025-10-13T22:10:42.072386273Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=f4f26fe0-750f-4948-a6db-f6374d299fb4 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 13 22:10:42 newest-cni-400889 crio[837]: time="2025-10-13T22:10:42.077609221Z" level=info msg="Ran pod sandbox 20ea41ec6ab03b388a3b7181e6787decb63fed10081d28368c522c0963b200e1 with infra container: kube-system/kindnet-k8zlc/POD" id=f4f26fe0-750f-4948-a6db-f6374d299fb4 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 13 22:10:42 newest-cni-400889 crio[837]: time="2025-10-13T22:10:42.082095893Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=0abee551-8f3c-4c85-94a1-fe2a650baeec name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:10:42 newest-cni-400889 crio[837]: time="2025-10-13T22:10:42.08668986Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=4af848b3-3bcd-4569-9f65-86fa9a2ffe17 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:10:42 newest-cni-400889 crio[837]: time="2025-10-13T22:10:42.096689656Z" level=info msg="Creating container: kube-system/kindnet-k8zlc/kindnet-cni" id=7fa691b4-1a1e-46e1-911a-b725a15beaf4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:10:42 newest-cni-400889 crio[837]: time="2025-10-13T22:10:42.097379911Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:10:42 newest-cni-400889 crio[837]: time="2025-10-13T22:10:42.110645371Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:10:42 newest-cni-400889 crio[837]: time="2025-10-13T22:10:42.114825884Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:10:42 newest-cni-400889 crio[837]: time="2025-10-13T22:10:42.141643664Z" level=info msg="Created container 27fba5de65869662c13038ab8050ba3d26e9bccaea0c7b4653ef5446001df09d: kube-system/kindnet-k8zlc/kindnet-cni" id=7fa691b4-1a1e-46e1-911a-b725a15beaf4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:10:42 newest-cni-400889 crio[837]: time="2025-10-13T22:10:42.143105051Z" level=info msg="Starting container: 27fba5de65869662c13038ab8050ba3d26e9bccaea0c7b4653ef5446001df09d" id=2412123d-b36c-4c5a-a2d7-d4b9b18fe9ac name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 22:10:42 newest-cni-400889 crio[837]: time="2025-10-13T22:10:42.148080629Z" level=info msg="Started container" PID=1511 containerID=27fba5de65869662c13038ab8050ba3d26e9bccaea0c7b4653ef5446001df09d description=kube-system/kindnet-k8zlc/kindnet-cni id=2412123d-b36c-4c5a-a2d7-d4b9b18fe9ac name=/runtime.v1.RuntimeService/StartContainer sandboxID=20ea41ec6ab03b388a3b7181e6787decb63fed10081d28368c522c0963b200e1
	Oct 13 22:10:42 newest-cni-400889 crio[837]: time="2025-10-13T22:10:42.663957994Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-2c8dd/POD" id=e2968470-8917-48f2-962d-c90949a21908 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 13 22:10:42 newest-cni-400889 crio[837]: time="2025-10-13T22:10:42.664024453Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:10:42 newest-cni-400889 crio[837]: time="2025-10-13T22:10:42.669812015Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=e2968470-8917-48f2-962d-c90949a21908 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 13 22:10:42 newest-cni-400889 crio[837]: time="2025-10-13T22:10:42.679670883Z" level=info msg="Ran pod sandbox e9588c0a569ef8c9a1188c84b9006a6c38c9367ca6174517246f7da2275448f4 with infra container: kube-system/kube-proxy-2c8dd/POD" id=e2968470-8917-48f2-962d-c90949a21908 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 13 22:10:42 newest-cni-400889 crio[837]: time="2025-10-13T22:10:42.681331755Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=cfa19d4f-ed3c-4254-80be-30ed2324cc0a name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:10:42 newest-cni-400889 crio[837]: time="2025-10-13T22:10:42.684847055Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=23a39099-2c9a-4d2c-adeb-85eae84f1d4d name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:10:42 newest-cni-400889 crio[837]: time="2025-10-13T22:10:42.691522807Z" level=info msg="Creating container: kube-system/kube-proxy-2c8dd/kube-proxy" id=35e90671-a18b-4abc-b559-ad33b120198d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:10:42 newest-cni-400889 crio[837]: time="2025-10-13T22:10:42.691832723Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:10:42 newest-cni-400889 crio[837]: time="2025-10-13T22:10:42.69819993Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:10:42 newest-cni-400889 crio[837]: time="2025-10-13T22:10:42.698782208Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:10:42 newest-cni-400889 crio[837]: time="2025-10-13T22:10:42.732111552Z" level=info msg="Created container 8386fa9c46f65c7830e2642d00769c2f784a5113838061af5e4559d0f254a217: kube-system/kube-proxy-2c8dd/kube-proxy" id=35e90671-a18b-4abc-b559-ad33b120198d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:10:42 newest-cni-400889 crio[837]: time="2025-10-13T22:10:42.736643056Z" level=info msg="Starting container: 8386fa9c46f65c7830e2642d00769c2f784a5113838061af5e4559d0f254a217" id=bf30b95a-e0a7-40ee-a25e-7b653f329a70 name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 22:10:42 newest-cni-400889 crio[837]: time="2025-10-13T22:10:42.748315924Z" level=info msg="Started container" PID=1586 containerID=8386fa9c46f65c7830e2642d00769c2f784a5113838061af5e4559d0f254a217 description=kube-system/kube-proxy-2c8dd/kube-proxy id=bf30b95a-e0a7-40ee-a25e-7b653f329a70 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e9588c0a569ef8c9a1188c84b9006a6c38c9367ca6174517246f7da2275448f4
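The CRI-O journal above ends with the kube-proxy and kindnet containers being started; the container status table that follows is what the CRI reports on the node. To inspect the same containers by hand (a sketch; run on the node, e.g. via minikube ssh -p newest-cni-400889):

    sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system
    sudo crictl logs 8386fa9c46f65   # kube-proxy container ID from the table below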
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	8386fa9c46f65       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   1 second ago        Running             kube-proxy                0                   e9588c0a569ef       kube-proxy-2c8dd                            kube-system
	27fba5de65869       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   1 second ago        Running             kindnet-cni               0                   20ea41ec6ab03       kindnet-k8zlc                               kube-system
	6f464b072a36f       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   14 seconds ago      Running             etcd                      0                   c567e978132f8       etcd-newest-cni-400889                      kube-system
	e3256d1590571       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   14 seconds ago      Running             kube-apiserver            0                   9782dcc7c5dd6       kube-apiserver-newest-cni-400889            kube-system
	b918b56efb38c       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   14 seconds ago      Running             kube-controller-manager   0                   1f37a566067fc       kube-controller-manager-newest-cni-400889   kube-system
	460086c66c888       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   14 seconds ago      Running             kube-scheduler            0                   0256ce2023025       kube-scheduler-newest-cni-400889            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-400889
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-400889
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6d66ff63385795e7745a92b3d96cb54f5b977801
	                    minikube.k8s.io/name=newest-cni-400889
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_13T22_10_37_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 Oct 2025 22:10:32 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-400889
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 Oct 2025 22:10:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 Oct 2025 22:10:36 +0000   Mon, 13 Oct 2025 22:10:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 Oct 2025 22:10:36 +0000   Mon, 13 Oct 2025 22:10:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 Oct 2025 22:10:36 +0000   Mon, 13 Oct 2025 22:10:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Mon, 13 Oct 2025 22:10:36 +0000   Mon, 13 Oct 2025 22:10:29 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-400889
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 063d7411736a4af8847340ff7b059438
	  System UUID:                081b52d5-83f5-4259-9831-31b23d524c2c
	  Boot ID:                    d306204e-e74c-4697-9887-c9f3c96cc083
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-400889                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         7s
	  kube-system                 kindnet-k8zlc                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      3s
	  kube-system                 kube-apiserver-newest-cni-400889             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kube-controller-manager-newest-cni-400889    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kube-proxy-2c8dd                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-scheduler-newest-cni-400889             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 0s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  15s (x8 over 15s)  kubelet          Node newest-cni-400889 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    15s (x8 over 15s)  kubelet          Node newest-cni-400889 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     15s (x8 over 15s)  kubelet          Node newest-cni-400889 status is now: NodeHasSufficientPID
	  Normal   Starting                 7s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 7s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  7s                 kubelet          Node newest-cni-400889 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7s                 kubelet          Node newest-cni-400889 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7s                 kubelet          Node newest-cni-400889 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           3s                 node-controller  Node newest-cni-400889 event: Registered Node newest-cni-400889 in Controller
	
	
	==> dmesg <==
	[Oct13 21:41] overlayfs: idmapped layers are currently not supported
	[Oct13 21:42] overlayfs: idmapped layers are currently not supported
	[  +7.684868] overlayfs: idmapped layers are currently not supported
	[Oct13 21:43] overlayfs: idmapped layers are currently not supported
	[ +17.500139] overlayfs: idmapped layers are currently not supported
	[Oct13 21:44] overlayfs: idmapped layers are currently not supported
	[ +25.978359] overlayfs: idmapped layers are currently not supported
	[Oct13 21:46] overlayfs: idmapped layers are currently not supported
	[Oct13 21:47] overlayfs: idmapped layers are currently not supported
	[Oct13 21:49] overlayfs: idmapped layers are currently not supported
	[Oct13 21:50] overlayfs: idmapped layers are currently not supported
	[Oct13 21:51] overlayfs: idmapped layers are currently not supported
	[Oct13 21:53] overlayfs: idmapped layers are currently not supported
	[Oct13 21:54] overlayfs: idmapped layers are currently not supported
	[Oct13 21:55] overlayfs: idmapped layers are currently not supported
	[Oct13 22:02] overlayfs: idmapped layers are currently not supported
	[Oct13 22:04] overlayfs: idmapped layers are currently not supported
	[ +37.438407] overlayfs: idmapped layers are currently not supported
	[Oct13 22:05] overlayfs: idmapped layers are currently not supported
	[Oct13 22:06] overlayfs: idmapped layers are currently not supported
	[Oct13 22:07] overlayfs: idmapped layers are currently not supported
	[ +29.672836] overlayfs: idmapped layers are currently not supported
	[Oct13 22:08] overlayfs: idmapped layers are currently not supported
	[Oct13 22:09] overlayfs: idmapped layers are currently not supported
	[Oct13 22:10] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [6f464b072a36f145f9da692f914e3c18ecd23879121eb1bc541c81b85045d7aa] <==
	{"level":"warn","ts":"2025-10-13T22:10:31.713595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:10:31.730969Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:10:31.746870Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:10:31.762563Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:10:31.780036Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:10:31.799497Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:10:31.813334Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:10:31.827972Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:10:31.875566Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:10:31.877212Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:10:31.892730Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:10:31.907376Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:10:31.923466Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:10:31.940581Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:10:31.960910Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:10:31.976559Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:10:31.993126Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:10:32.011304Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:10:32.031633Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:10:32.047120Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:10:32.072529Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:10:32.098502Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:10:32.115910Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:10:32.128851Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:10:32.201144Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42872","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 22:10:43 up  1:52,  0 user,  load average: 3.95, 3.14, 2.41
	Linux newest-cni-400889 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [27fba5de65869662c13038ab8050ba3d26e9bccaea0c7b4653ef5446001df09d] <==
	I1013 22:10:42.314242       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1013 22:10:42.404050       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1013 22:10:42.404233       1 main.go:148] setting mtu 1500 for CNI 
	I1013 22:10:42.404303       1 main.go:178] kindnetd IP family: "ipv4"
	I1013 22:10:42.404345       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-13T22:10:42Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1013 22:10:42.604636       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1013 22:10:42.604653       1 controller.go:381] "Waiting for informer caches to sync"
	I1013 22:10:42.604661       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1013 22:10:42.605328       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [e3256d15905714c4988dbf90a0587fe70061b9fb109cc4fac434953781b1e324] <==
	I1013 22:10:33.057951       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1013 22:10:33.058150       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	E1013 22:10:33.069756       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1013 22:10:33.071554       1 controller.go:667] quota admission added evaluator for: namespaces
	I1013 22:10:33.073910       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1013 22:10:33.079287       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	E1013 22:10:33.086473       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1013 22:10:33.278503       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1013 22:10:33.749768       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1013 22:10:33.754838       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1013 22:10:33.754917       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1013 22:10:34.830666       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1013 22:10:34.910756       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1013 22:10:35.033139       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1013 22:10:35.054772       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1013 22:10:35.058122       1 controller.go:667] quota admission added evaluator for: endpoints
	I1013 22:10:35.063940       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1013 22:10:35.071073       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1013 22:10:35.980330       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1013 22:10:36.003036       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1013 22:10:36.027064       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1013 22:10:40.803125       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1013 22:10:40.977395       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1013 22:10:40.982535       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1013 22:10:41.242223       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [b918b56efb38c33af108254fba53db007adf7efdfa970674b73505166fec34f5] <==
	I1013 22:10:40.053210       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1013 22:10:40.053223       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1013 22:10:40.053232       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1013 22:10:40.053262       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1013 22:10:40.053289       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1013 22:10:40.062777       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1013 22:10:40.063509       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1013 22:10:40.063671       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1013 22:10:40.068992       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-400889"
	I1013 22:10:40.069095       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1013 22:10:40.068878       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1013 22:10:40.069709       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="newest-cni-400889" podCIDRs=["10.42.0.0/24"]
	I1013 22:10:40.069876       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1013 22:10:40.073049       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 22:10:40.073159       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1013 22:10:40.073322       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1013 22:10:40.076825       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 22:10:40.086539       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1013 22:10:40.087761       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1013 22:10:40.096067       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1013 22:10:40.097480       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1013 22:10:40.097759       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1013 22:10:40.097826       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1013 22:10:40.098745       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1013 22:10:40.098841       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	
	
	==> kube-proxy [8386fa9c46f65c7830e2642d00769c2f784a5113838061af5e4559d0f254a217] <==
	I1013 22:10:42.816235       1 server_linux.go:53] "Using iptables proxy"
	I1013 22:10:42.893574       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 22:10:42.994125       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 22:10:42.994170       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1013 22:10:42.994275       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 22:10:43.025621       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1013 22:10:43.025675       1 server_linux.go:132] "Using iptables Proxier"
	I1013 22:10:43.029845       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 22:10:43.030479       1 server.go:527] "Version info" version="v1.34.1"
	I1013 22:10:43.030501       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 22:10:43.040800       1 config.go:200] "Starting service config controller"
	I1013 22:10:43.040817       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 22:10:43.040844       1 config.go:106] "Starting endpoint slice config controller"
	I1013 22:10:43.040849       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 22:10:43.040858       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 22:10:43.040861       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 22:10:43.041604       1 config.go:309] "Starting node config controller"
	I1013 22:10:43.041612       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 22:10:43.041618       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 22:10:43.140986       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1013 22:10:43.140998       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1013 22:10:43.141022       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [460086c66c888df66c4f6bb5ce876c30b9dc8adf0fdfc0b423bf2418a82964b2] <==
	E1013 22:10:33.050349       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1013 22:10:33.050409       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1013 22:10:33.050468       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1013 22:10:33.050531       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1013 22:10:33.050586       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1013 22:10:33.050647       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1013 22:10:33.050813       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1013 22:10:33.050928       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1013 22:10:33.050998       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1013 22:10:33.862119       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1013 22:10:33.869330       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1013 22:10:33.899062       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1013 22:10:33.906499       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1013 22:10:33.910130       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1013 22:10:34.008322       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1013 22:10:34.060277       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1013 22:10:34.157174       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1013 22:10:34.187511       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1013 22:10:34.193723       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1013 22:10:34.236262       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1013 22:10:34.238435       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1013 22:10:34.253822       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1013 22:10:34.295775       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1013 22:10:34.332101       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	I1013 22:10:36.211154       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 13 22:10:37 newest-cni-400889 kubelet[1332]: I1013 22:10:37.290634    1332 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-400889"
	Oct 13 22:10:37 newest-cni-400889 kubelet[1332]: E1013 22:10:37.306827    1332 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-400889\" already exists" pod="kube-system/kube-scheduler-newest-cni-400889"
	Oct 13 22:10:37 newest-cni-400889 kubelet[1332]: I1013 22:10:37.377154    1332 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-400889" podStartSLOduration=1.377133727 podStartE2EDuration="1.377133727s" podCreationTimestamp="2025-10-13 22:10:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 22:10:37.352166189 +0000 UTC m=+1.475953467" watchObservedRunningTime="2025-10-13 22:10:37.377133727 +0000 UTC m=+1.500921005"
	Oct 13 22:10:37 newest-cni-400889 kubelet[1332]: I1013 22:10:37.401403    1332 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-400889" podStartSLOduration=1.401383294 podStartE2EDuration="1.401383294s" podCreationTimestamp="2025-10-13 22:10:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 22:10:37.378076998 +0000 UTC m=+1.501864276" watchObservedRunningTime="2025-10-13 22:10:37.401383294 +0000 UTC m=+1.525170564"
	Oct 13 22:10:37 newest-cni-400889 kubelet[1332]: I1013 22:10:37.456288    1332 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-400889" podStartSLOduration=1.4562564359999999 podStartE2EDuration="1.456256436s" podCreationTimestamp="2025-10-13 22:10:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 22:10:37.40502028 +0000 UTC m=+1.528807550" watchObservedRunningTime="2025-10-13 22:10:37.456256436 +0000 UTC m=+1.580043722"
	Oct 13 22:10:37 newest-cni-400889 kubelet[1332]: I1013 22:10:37.456417    1332 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-400889" podStartSLOduration=1.456403156 podStartE2EDuration="1.456403156s" podCreationTimestamp="2025-10-13 22:10:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 22:10:37.447334611 +0000 UTC m=+1.571121881" watchObservedRunningTime="2025-10-13 22:10:37.456403156 +0000 UTC m=+1.580190434"
	Oct 13 22:10:40 newest-cni-400889 kubelet[1332]: I1013 22:10:40.105175    1332 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 13 22:10:40 newest-cni-400889 kubelet[1332]: I1013 22:10:40.106485    1332 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 13 22:10:40 newest-cni-400889 kubelet[1332]: E1013 22:10:40.864743    1332 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-2c8dd\" is forbidden: User \"system:node:newest-cni-400889\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'newest-cni-400889' and this object" podUID="e0608056-bfa9-46cf-a6c4-da63c05dc51a" pod="kube-system/kube-proxy-2c8dd"
	Oct 13 22:10:40 newest-cni-400889 kubelet[1332]: E1013 22:10:40.865023    1332 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:newest-cni-400889\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'newest-cni-400889' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Oct 13 22:10:40 newest-cni-400889 kubelet[1332]: E1013 22:10:40.865073    1332 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:newest-cni-400889\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'newest-cni-400889' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Oct 13 22:10:40 newest-cni-400889 kubelet[1332]: I1013 22:10:40.895273    1332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e0608056-bfa9-46cf-a6c4-da63c05dc51a-kube-proxy\") pod \"kube-proxy-2c8dd\" (UID: \"e0608056-bfa9-46cf-a6c4-da63c05dc51a\") " pod="kube-system/kube-proxy-2c8dd"
	Oct 13 22:10:40 newest-cni-400889 kubelet[1332]: I1013 22:10:40.895545    1332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e0608056-bfa9-46cf-a6c4-da63c05dc51a-xtables-lock\") pod \"kube-proxy-2c8dd\" (UID: \"e0608056-bfa9-46cf-a6c4-da63c05dc51a\") " pod="kube-system/kube-proxy-2c8dd"
	Oct 13 22:10:40 newest-cni-400889 kubelet[1332]: I1013 22:10:40.895674    1332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e0608056-bfa9-46cf-a6c4-da63c05dc51a-lib-modules\") pod \"kube-proxy-2c8dd\" (UID: \"e0608056-bfa9-46cf-a6c4-da63c05dc51a\") " pod="kube-system/kube-proxy-2c8dd"
	Oct 13 22:10:40 newest-cni-400889 kubelet[1332]: I1013 22:10:40.895821    1332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wxh6\" (UniqueName: \"kubernetes.io/projected/e0608056-bfa9-46cf-a6c4-da63c05dc51a-kube-api-access-5wxh6\") pod \"kube-proxy-2c8dd\" (UID: \"e0608056-bfa9-46cf-a6c4-da63c05dc51a\") " pod="kube-system/kube-proxy-2c8dd"
	Oct 13 22:10:40 newest-cni-400889 kubelet[1332]: I1013 22:10:40.895957    1332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/bce90592-0127-4946-bc83-a6b06490dcc1-cni-cfg\") pod \"kindnet-k8zlc\" (UID: \"bce90592-0127-4946-bc83-a6b06490dcc1\") " pod="kube-system/kindnet-k8zlc"
	Oct 13 22:10:40 newest-cni-400889 kubelet[1332]: I1013 22:10:40.896071    1332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bce90592-0127-4946-bc83-a6b06490dcc1-lib-modules\") pod \"kindnet-k8zlc\" (UID: \"bce90592-0127-4946-bc83-a6b06490dcc1\") " pod="kube-system/kindnet-k8zlc"
	Oct 13 22:10:40 newest-cni-400889 kubelet[1332]: I1013 22:10:40.896178    1332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfcmn\" (UniqueName: \"kubernetes.io/projected/bce90592-0127-4946-bc83-a6b06490dcc1-kube-api-access-qfcmn\") pod \"kindnet-k8zlc\" (UID: \"bce90592-0127-4946-bc83-a6b06490dcc1\") " pod="kube-system/kindnet-k8zlc"
	Oct 13 22:10:40 newest-cni-400889 kubelet[1332]: I1013 22:10:40.896282    1332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bce90592-0127-4946-bc83-a6b06490dcc1-xtables-lock\") pod \"kindnet-k8zlc\" (UID: \"bce90592-0127-4946-bc83-a6b06490dcc1\") " pod="kube-system/kindnet-k8zlc"
	Oct 13 22:10:41 newest-cni-400889 kubelet[1332]: I1013 22:10:41.892759    1332 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 13 22:10:41 newest-cni-400889 kubelet[1332]: E1013 22:10:41.998870    1332 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
	Oct 13 22:10:41 newest-cni-400889 kubelet[1332]: E1013 22:10:41.998980    1332 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e0608056-bfa9-46cf-a6c4-da63c05dc51a-kube-proxy podName:e0608056-bfa9-46cf-a6c4-da63c05dc51a nodeName:}" failed. No retries permitted until 2025-10-13 22:10:42.498954588 +0000 UTC m=+6.622741857 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/e0608056-bfa9-46cf-a6c4-da63c05dc51a-kube-proxy") pod "kube-proxy-2c8dd" (UID: "e0608056-bfa9-46cf-a6c4-da63c05dc51a") : failed to sync configmap cache: timed out waiting for the condition
	Oct 13 22:10:42 newest-cni-400889 kubelet[1332]: W1013 22:10:42.075528    1332 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/327a4b5bba33839b9e9e68b98fb65f311428d8ca9a779bf2218877efb3db1dda/crio-20ea41ec6ab03b388a3b7181e6787decb63fed10081d28368c522c0963b200e1 WatchSource:0}: Error finding container 20ea41ec6ab03b388a3b7181e6787decb63fed10081d28368c522c0963b200e1: Status 404 returned error can't find the container with id 20ea41ec6ab03b388a3b7181e6787decb63fed10081d28368c522c0963b200e1
	Oct 13 22:10:42 newest-cni-400889 kubelet[1332]: W1013 22:10:42.679209    1332 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/327a4b5bba33839b9e9e68b98fb65f311428d8ca9a779bf2218877efb3db1dda/crio-e9588c0a569ef8c9a1188c84b9006a6c38c9367ca6174517246f7da2275448f4 WatchSource:0}: Error finding container e9588c0a569ef8c9a1188c84b9006a6c38c9367ca6174517246f7da2275448f4: Status 404 returned error can't find the container with id e9588c0a569ef8c9a1188c84b9006a6c38c9367ca6174517246f7da2275448f4
	Oct 13 22:10:43 newest-cni-400889 kubelet[1332]: I1013 22:10:43.323541    1332 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-k8zlc" podStartSLOduration=3.32350836 podStartE2EDuration="3.32350836s" podCreationTimestamp="2025-10-13 22:10:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 22:10:42.323871281 +0000 UTC m=+6.447658592" watchObservedRunningTime="2025-10-13 22:10:43.32350836 +0000 UTC m=+7.447295638"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-400889 -n newest-cni-400889
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-400889 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-cc4wf storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-400889 describe pod coredns-66bc5c9577-cc4wf storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-400889 describe pod coredns-66bc5c9577-cc4wf storage-provisioner: exit status 1 (77.08376ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-cc4wf" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-400889 describe pod coredns-66bc5c9577-cc4wf storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.31s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (7.51s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-400889 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p newest-cni-400889 --alsologtostderr -v=1: exit status 80 (2.674941279s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-400889 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1013 22:11:08.716906  211216 out.go:360] Setting OutFile to fd 1 ...
	I1013 22:11:08.721226  211216 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:11:08.721239  211216 out.go:374] Setting ErrFile to fd 2...
	I1013 22:11:08.721245  211216 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:11:08.721536  211216 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-2495/.minikube/bin
	I1013 22:11:08.721797  211216 out.go:368] Setting JSON to false
	I1013 22:11:08.721811  211216 mustload.go:65] Loading cluster: newest-cni-400889
	I1013 22:11:08.722210  211216 config.go:182] Loaded profile config "newest-cni-400889": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:11:08.722665  211216 cli_runner.go:164] Run: docker container inspect newest-cni-400889 --format={{.State.Status}}
	I1013 22:11:08.753293  211216 host.go:66] Checking if "newest-cni-400889" exists ...
	I1013 22:11:08.753594  211216 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 22:11:08.855605  211216 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:63 SystemTime:2025-10-13 22:11:08.841211782 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 22:11:08.856289  211216 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-400889 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1013 22:11:08.861127  211216 out.go:179] * Pausing node newest-cni-400889 ... 
	I1013 22:11:08.864156  211216 host.go:66] Checking if "newest-cni-400889" exists ...
	I1013 22:11:08.864470  211216 ssh_runner.go:195] Run: systemctl --version
	I1013 22:11:08.864513  211216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-400889
	I1013 22:11:08.889448  211216 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33091 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/newest-cni-400889/id_rsa Username:docker}
	I1013 22:11:09.041718  211216 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 22:11:09.085728  211216 pause.go:52] kubelet running: true
	I1013 22:11:09.085815  211216 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1013 22:11:09.531216  211216 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1013 22:11:09.531307  211216 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1013 22:11:09.729119  211216 cri.go:89] found id: "bec5eebd1eb8276e9c73cbe2f0b464d2ff14db348e354fdc02e6dc9d6b2c215b"
	I1013 22:11:09.729162  211216 cri.go:89] found id: "2ce0da1c78d96e3b60d9dcbced297952f5c4e36a1407c9551108d004be8d0016"
	I1013 22:11:09.729168  211216 cri.go:89] found id: "29059c40b00ad04ad44738293f0b2017c88e1b61ccaeaff02d2db844814fa5f1"
	I1013 22:11:09.729172  211216 cri.go:89] found id: "cbd66f7e4aa28f071adbbc9c82004ce2e6b5d7758657b18b8705a841c024a4f4"
	I1013 22:11:09.729176  211216 cri.go:89] found id: "d7f909f4526bb28a23b8c602214df3866cb2dc9cc28c820891839a829fc81270"
	I1013 22:11:09.729180  211216 cri.go:89] found id: "41f03a8f2cf4c63c92345cd9504c3c2a0150d9555a97a54de9a811e88e7eb0f6"
	I1013 22:11:09.729183  211216 cri.go:89] found id: ""
	I1013 22:11:09.729245  211216 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 22:11:09.766521  211216 retry.go:31] will retry after 296.978462ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:11:09Z" level=error msg="open /run/runc: no such file or directory"
	I1013 22:11:10.064689  211216 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 22:11:10.116236  211216 pause.go:52] kubelet running: false
	I1013 22:11:10.116323  211216 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1013 22:11:10.467204  211216 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1013 22:11:10.467291  211216 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1013 22:11:10.590352  211216 cri.go:89] found id: "bec5eebd1eb8276e9c73cbe2f0b464d2ff14db348e354fdc02e6dc9d6b2c215b"
	I1013 22:11:10.590377  211216 cri.go:89] found id: "2ce0da1c78d96e3b60d9dcbced297952f5c4e36a1407c9551108d004be8d0016"
	I1013 22:11:10.590383  211216 cri.go:89] found id: "29059c40b00ad04ad44738293f0b2017c88e1b61ccaeaff02d2db844814fa5f1"
	I1013 22:11:10.590387  211216 cri.go:89] found id: "cbd66f7e4aa28f071adbbc9c82004ce2e6b5d7758657b18b8705a841c024a4f4"
	I1013 22:11:10.590390  211216 cri.go:89] found id: "d7f909f4526bb28a23b8c602214df3866cb2dc9cc28c820891839a829fc81270"
	I1013 22:11:10.590394  211216 cri.go:89] found id: "41f03a8f2cf4c63c92345cd9504c3c2a0150d9555a97a54de9a811e88e7eb0f6"
	I1013 22:11:10.590397  211216 cri.go:89] found id: ""
	I1013 22:11:10.590444  211216 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 22:11:10.620970  211216 retry.go:31] will retry after 272.0032ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:11:10Z" level=error msg="open /run/runc: no such file or directory"
	I1013 22:11:10.893169  211216 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 22:11:10.912676  211216 pause.go:52] kubelet running: false
	I1013 22:11:10.912750  211216 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1013 22:11:11.173780  211216 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1013 22:11:11.173870  211216 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1013 22:11:11.264429  211216 cri.go:89] found id: "bec5eebd1eb8276e9c73cbe2f0b464d2ff14db348e354fdc02e6dc9d6b2c215b"
	I1013 22:11:11.264460  211216 cri.go:89] found id: "2ce0da1c78d96e3b60d9dcbced297952f5c4e36a1407c9551108d004be8d0016"
	I1013 22:11:11.264466  211216 cri.go:89] found id: "29059c40b00ad04ad44738293f0b2017c88e1b61ccaeaff02d2db844814fa5f1"
	I1013 22:11:11.264470  211216 cri.go:89] found id: "cbd66f7e4aa28f071adbbc9c82004ce2e6b5d7758657b18b8705a841c024a4f4"
	I1013 22:11:11.264473  211216 cri.go:89] found id: "d7f909f4526bb28a23b8c602214df3866cb2dc9cc28c820891839a829fc81270"
	I1013 22:11:11.264477  211216 cri.go:89] found id: "41f03a8f2cf4c63c92345cd9504c3c2a0150d9555a97a54de9a811e88e7eb0f6"
	I1013 22:11:11.264481  211216 cri.go:89] found id: ""
	I1013 22:11:11.264528  211216 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 22:11:11.279511  211216 out.go:203] 
	W1013 22:11:11.282515  211216 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:11:11Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:11:11Z" level=error msg="open /run/runc: no such file or directory"
	
	W1013 22:11:11.282537  211216 out.go:285] * 
	* 
	W1013 22:11:11.288334  211216 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1013 22:11:11.291198  211216 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p newest-cni-400889 --alsologtostderr -v=1 failed: exit status 80
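The pause failure above reduces to "sudo runc list -f json" exiting 1 with "open /run/runc: no such file or directory" on the node, which minikube surfaces as GUEST_PAUSE. A minimal way to inspect the runtime state by hand is sketched below, assuming the newest-cni-400889 profile from this run still exists and that runc and crictl are present on the node as the log shows:

	out/minikube-linux-arm64 -p newest-cni-400889 ssh -- sudo ls -ld /run/runc      # runc state directory the pause path lists
	out/minikube-linux-arm64 -p newest-cni-400889 ssh -- sudo crictl ps -a          # containers as seen by CRI-O
	out/minikube-linux-arm64 -p newest-cni-400889 ssh -- sudo runc list -f json     # the exact command the pause path retries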
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-400889
helpers_test.go:243: (dbg) docker inspect newest-cni-400889:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "327a4b5bba33839b9e9e68b98fb65f311428d8ca9a779bf2218877efb3db1dda",
	        "Created": "2025-10-13T22:10:05.991697046Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 208054,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-13T22:10:46.460409905Z",
	            "FinishedAt": "2025-10-13T22:10:45.51934127Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/327a4b5bba33839b9e9e68b98fb65f311428d8ca9a779bf2218877efb3db1dda/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/327a4b5bba33839b9e9e68b98fb65f311428d8ca9a779bf2218877efb3db1dda/hostname",
	        "HostsPath": "/var/lib/docker/containers/327a4b5bba33839b9e9e68b98fb65f311428d8ca9a779bf2218877efb3db1dda/hosts",
	        "LogPath": "/var/lib/docker/containers/327a4b5bba33839b9e9e68b98fb65f311428d8ca9a779bf2218877efb3db1dda/327a4b5bba33839b9e9e68b98fb65f311428d8ca9a779bf2218877efb3db1dda-json.log",
	        "Name": "/newest-cni-400889",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-400889:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-400889",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "327a4b5bba33839b9e9e68b98fb65f311428d8ca9a779bf2218877efb3db1dda",
	                "LowerDir": "/var/lib/docker/overlay2/c2ce8a657d5380be77a974a499c284981153c449892cad04318c236219fcf9f7-init/diff:/var/lib/docker/overlay2/160e637be34680c755ffc6109c686a7fe5c4dfcba06c9274b3806724dc064518/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c2ce8a657d5380be77a974a499c284981153c449892cad04318c236219fcf9f7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c2ce8a657d5380be77a974a499c284981153c449892cad04318c236219fcf9f7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c2ce8a657d5380be77a974a499c284981153c449892cad04318c236219fcf9f7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-400889",
	                "Source": "/var/lib/docker/volumes/newest-cni-400889/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-400889",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-400889",
	                "name.minikube.sigs.k8s.io": "newest-cni-400889",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e369104e511244431739c735dc726ef3b54e13c953cdf37a3e751c29cd7d98e2",
	            "SandboxKey": "/var/run/docker/netns/e369104e5112",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-400889": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "76:e2:c3:67:52:4b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d596263e55a2c1a0ad1158c1d748ddecdc9ebcca3cfd3b93c9472d82661a4237",
	                    "EndpointID": "e8618627f391bd15a3f4db4869a7108578189c258bbd38d5fb9d7d5ba1e1fc42",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-400889",
	                        "327a4b5bba33"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
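The inspect dump confirms the kic container itself is Running and not Paused at the Docker level. The individual fields the post-mortem relies on can be pulled directly with the same Go-template syntax the harness uses elsewhere in this log; a short sketch, reusing the container name from this run:

	docker inspect -f '{{.State.Status}} paused={{.State.Paused}} pid={{.State.Pid}}' newest-cni-400889
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' newest-cni-400889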
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-400889 -n newest-cni-400889
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-400889 -n newest-cni-400889: exit status 2 (396.386085ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-400889 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-400889 logs -n 25: (1.165287361s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ addons  │ enable metrics-server -p embed-certs-251758 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-251758           │ jenkins │ v1.37.0 │ 13 Oct 25 22:08 UTC │                     │
	│ stop    │ -p embed-certs-251758 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-251758           │ jenkins │ v1.37.0 │ 13 Oct 25 22:08 UTC │ 13 Oct 25 22:08 UTC │
	│ addons  │ enable dashboard -p embed-certs-251758 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-251758           │ jenkins │ v1.37.0 │ 13 Oct 25 22:08 UTC │ 13 Oct 25 22:08 UTC │
	│ start   │ -p embed-certs-251758 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-251758           │ jenkins │ v1.37.0 │ 13 Oct 25 22:08 UTC │ 13 Oct 25 22:09 UTC │
	│ image   │ no-preload-998398 image list --format=json                                                                                                                                                                                                    │ no-preload-998398            │ jenkins │ v1.37.0 │ 13 Oct 25 22:08 UTC │ 13 Oct 25 22:08 UTC │
	│ pause   │ -p no-preload-998398 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-998398            │ jenkins │ v1.37.0 │ 13 Oct 25 22:08 UTC │                     │
	│ delete  │ -p no-preload-998398                                                                                                                                                                                                                          │ no-preload-998398            │ jenkins │ v1.37.0 │ 13 Oct 25 22:08 UTC │ 13 Oct 25 22:09 UTC │
	│ delete  │ -p no-preload-998398                                                                                                                                                                                                                          │ no-preload-998398            │ jenkins │ v1.37.0 │ 13 Oct 25 22:09 UTC │ 13 Oct 25 22:09 UTC │
	│ delete  │ -p disable-driver-mounts-691681                                                                                                                                                                                                               │ disable-driver-mounts-691681 │ jenkins │ v1.37.0 │ 13 Oct 25 22:09 UTC │ 13 Oct 25 22:09 UTC │
	│ start   │ -p default-k8s-diff-port-007533 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-007533 │ jenkins │ v1.37.0 │ 13 Oct 25 22:09 UTC │ 13 Oct 25 22:10 UTC │
	│ image   │ embed-certs-251758 image list --format=json                                                                                                                                                                                                   │ embed-certs-251758           │ jenkins │ v1.37.0 │ 13 Oct 25 22:09 UTC │ 13 Oct 25 22:09 UTC │
	│ pause   │ -p embed-certs-251758 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-251758           │ jenkins │ v1.37.0 │ 13 Oct 25 22:09 UTC │                     │
	│ delete  │ -p embed-certs-251758                                                                                                                                                                                                                         │ embed-certs-251758           │ jenkins │ v1.37.0 │ 13 Oct 25 22:09 UTC │ 13 Oct 25 22:09 UTC │
	│ delete  │ -p embed-certs-251758                                                                                                                                                                                                                         │ embed-certs-251758           │ jenkins │ v1.37.0 │ 13 Oct 25 22:10 UTC │ 13 Oct 25 22:10 UTC │
	│ start   │ -p newest-cni-400889 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-400889            │ jenkins │ v1.37.0 │ 13 Oct 25 22:10 UTC │ 13 Oct 25 22:10 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-007533 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-007533 │ jenkins │ v1.37.0 │ 13 Oct 25 22:10 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-007533 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-007533 │ jenkins │ v1.37.0 │ 13 Oct 25 22:10 UTC │ 13 Oct 25 22:10 UTC │
	│ addons  │ enable metrics-server -p newest-cni-400889 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-400889            │ jenkins │ v1.37.0 │ 13 Oct 25 22:10 UTC │                     │
	│ stop    │ -p newest-cni-400889 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-400889            │ jenkins │ v1.37.0 │ 13 Oct 25 22:10 UTC │ 13 Oct 25 22:10 UTC │
	│ addons  │ enable dashboard -p newest-cni-400889 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-400889            │ jenkins │ v1.37.0 │ 13 Oct 25 22:10 UTC │ 13 Oct 25 22:10 UTC │
	│ start   │ -p newest-cni-400889 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-400889            │ jenkins │ v1.37.0 │ 13 Oct 25 22:10 UTC │ 13 Oct 25 22:11 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-007533 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-007533 │ jenkins │ v1.37.0 │ 13 Oct 25 22:10 UTC │ 13 Oct 25 22:10 UTC │
	│ start   │ -p default-k8s-diff-port-007533 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-007533 │ jenkins │ v1.37.0 │ 13 Oct 25 22:10 UTC │                     │
	│ image   │ newest-cni-400889 image list --format=json                                                                                                                                                                                                    │ newest-cni-400889            │ jenkins │ v1.37.0 │ 13 Oct 25 22:11 UTC │ 13 Oct 25 22:11 UTC │
	│ pause   │ -p newest-cni-400889 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-400889            │ jenkins │ v1.37.0 │ 13 Oct 25 22:11 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 22:10:49
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 22:10:49.201763  208589 out.go:360] Setting OutFile to fd 1 ...
	I1013 22:10:49.201894  208589 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:10:49.201905  208589 out.go:374] Setting ErrFile to fd 2...
	I1013 22:10:49.201909  208589 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:10:49.202206  208589 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-2495/.minikube/bin
	I1013 22:10:49.202571  208589 out.go:368] Setting JSON to false
	I1013 22:10:49.203437  208589 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":6784,"bootTime":1760386666,"procs":167,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1013 22:10:49.203501  208589 start.go:141] virtualization:  
	I1013 22:10:49.206276  208589 out.go:179] * [default-k8s-diff-port-007533] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1013 22:10:49.209956  208589 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 22:10:49.210009  208589 notify.go:220] Checking for updates...
	I1013 22:10:49.216483  208589 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 22:10:49.219307  208589 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-2495/kubeconfig
	I1013 22:10:49.222297  208589 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-2495/.minikube
	I1013 22:10:49.225184  208589 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1013 22:10:49.228184  208589 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 22:10:49.231388  208589 config.go:182] Loaded profile config "default-k8s-diff-port-007533": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:10:49.232052  208589 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 22:10:49.252685  208589 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1013 22:10:49.252796  208589 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 22:10:49.314498  208589 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-13 22:10:49.305250888 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 22:10:49.314612  208589 docker.go:318] overlay module found
	I1013 22:10:49.317840  208589 out.go:179] * Using the docker driver based on existing profile
	I1013 22:10:49.320599  208589 start.go:305] selected driver: docker
	I1013 22:10:49.320620  208589 start.go:925] validating driver "docker" against &{Name:default-k8s-diff-port-007533 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-007533 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:10:49.320726  208589 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 22:10:49.321418  208589 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 22:10:49.375566  208589 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-13 22:10:49.366993565 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 22:10:49.375930  208589 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 22:10:49.375964  208589 cni.go:84] Creating CNI manager for ""
	I1013 22:10:49.376023  208589 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 22:10:49.376062  208589 start.go:349] cluster config:
	{Name:default-k8s-diff-port-007533 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-007533 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:10:49.381184  208589 out.go:179] * Starting "default-k8s-diff-port-007533" primary control-plane node in "default-k8s-diff-port-007533" cluster
	I1013 22:10:49.383986  208589 cache.go:123] Beginning downloading kic base image for docker with crio
	I1013 22:10:49.386867  208589 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1013 22:10:49.389709  208589 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:10:49.389759  208589 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-2495/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1013 22:10:49.389772  208589 cache.go:58] Caching tarball of preloaded images
	I1013 22:10:49.389809  208589 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1013 22:10:49.389858  208589 preload.go:233] Found /home/jenkins/minikube-integration/21724-2495/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1013 22:10:49.389869  208589 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1013 22:10:49.390003  208589 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/default-k8s-diff-port-007533/config.json ...
	I1013 22:10:49.409485  208589 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1013 22:10:49.409506  208589 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1013 22:10:49.409542  208589 cache.go:232] Successfully downloaded all kic artifacts
	I1013 22:10:49.409571  208589 start.go:360] acquireMachinesLock for default-k8s-diff-port-007533: {Name:mk990b5defb290df24f36fb536d48d3275652286 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 22:10:49.409625  208589 start.go:364] duration metric: took 32.762µs to acquireMachinesLock for "default-k8s-diff-port-007533"
	I1013 22:10:49.409648  208589 start.go:96] Skipping create...Using existing machine configuration
	I1013 22:10:49.409661  208589 fix.go:54] fixHost starting: 
	I1013 22:10:49.409903  208589 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-007533 --format={{.State.Status}}
	I1013 22:10:49.426175  208589 fix.go:112] recreateIfNeeded on default-k8s-diff-port-007533: state=Stopped err=<nil>
	W1013 22:10:49.426204  208589 fix.go:138] unexpected machine state, will restart: <nil>
	I1013 22:10:46.428979  207923 out.go:252] * Restarting existing docker container for "newest-cni-400889" ...
	I1013 22:10:46.429076  207923 cli_runner.go:164] Run: docker start newest-cni-400889
	I1013 22:10:46.685088  207923 cli_runner.go:164] Run: docker container inspect newest-cni-400889 --format={{.State.Status}}
	I1013 22:10:46.706065  207923 kic.go:430] container "newest-cni-400889" state is running.
	I1013 22:10:46.708068  207923 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-400889
	I1013 22:10:46.734974  207923 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/newest-cni-400889/config.json ...
	I1013 22:10:46.735197  207923 machine.go:93] provisionDockerMachine start ...
	I1013 22:10:46.735429  207923 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-400889
	I1013 22:10:46.759945  207923 main.go:141] libmachine: Using SSH client type: native
	I1013 22:10:46.760252  207923 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33091 <nil> <nil>}
	I1013 22:10:46.760262  207923 main.go:141] libmachine: About to run SSH command:
	hostname
	I1013 22:10:46.761035  207923 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1013 22:10:49.923873  207923 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-400889
	
	I1013 22:10:49.923959  207923 ubuntu.go:182] provisioning hostname "newest-cni-400889"
	I1013 22:10:49.924052  207923 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-400889
	I1013 22:10:49.948327  207923 main.go:141] libmachine: Using SSH client type: native
	I1013 22:10:49.948686  207923 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33091 <nil> <nil>}
	I1013 22:10:49.948728  207923 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-400889 && echo "newest-cni-400889" | sudo tee /etc/hostname
	I1013 22:10:50.147332  207923 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-400889
	
	I1013 22:10:50.147475  207923 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-400889
	I1013 22:10:50.178758  207923 main.go:141] libmachine: Using SSH client type: native
	I1013 22:10:50.179075  207923 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33091 <nil> <nil>}
	I1013 22:10:50.179094  207923 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-400889' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-400889/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-400889' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 22:10:50.332142  207923 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1013 22:10:50.332170  207923 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-2495/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-2495/.minikube}
	I1013 22:10:50.332193  207923 ubuntu.go:190] setting up certificates
	I1013 22:10:50.332205  207923 provision.go:84] configureAuth start
	I1013 22:10:50.332264  207923 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-400889
	I1013 22:10:50.349907  207923 provision.go:143] copyHostCerts
	I1013 22:10:50.349970  207923 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-2495/.minikube/ca.pem, removing ...
	I1013 22:10:50.349996  207923 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-2495/.minikube/ca.pem
	I1013 22:10:50.350077  207923 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-2495/.minikube/ca.pem (1082 bytes)
	I1013 22:10:50.350177  207923 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-2495/.minikube/cert.pem, removing ...
	I1013 22:10:50.350187  207923 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-2495/.minikube/cert.pem
	I1013 22:10:50.350219  207923 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-2495/.minikube/cert.pem (1123 bytes)
	I1013 22:10:50.350273  207923 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-2495/.minikube/key.pem, removing ...
	I1013 22:10:50.350284  207923 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-2495/.minikube/key.pem
	I1013 22:10:50.350308  207923 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-2495/.minikube/key.pem (1675 bytes)
	I1013 22:10:50.350355  207923 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-2495/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca-key.pem org=jenkins.newest-cni-400889 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-400889]
	I1013 22:10:51.819381  207923 provision.go:177] copyRemoteCerts
	I1013 22:10:51.819472  207923 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 22:10:51.819520  207923 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-400889
	I1013 22:10:51.837391  207923 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33091 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/newest-cni-400889/id_rsa Username:docker}
	I1013 22:10:51.941181  207923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1013 22:10:51.957921  207923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1013 22:10:51.975247  207923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1013 22:10:51.992135  207923 provision.go:87] duration metric: took 1.659906869s to configureAuth
	I1013 22:10:51.992162  207923 ubuntu.go:206] setting minikube options for container-runtime
	I1013 22:10:51.992386  207923 config.go:182] Loaded profile config "newest-cni-400889": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:10:51.992505  207923 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-400889
	I1013 22:10:52.012670  207923 main.go:141] libmachine: Using SSH client type: native
	I1013 22:10:52.013000  207923 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33091 <nil> <nil>}
	I1013 22:10:52.013021  207923 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1013 22:10:52.307539  207923 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1013 22:10:52.307561  207923 machine.go:96] duration metric: took 5.57235436s to provisionDockerMachine
	I1013 22:10:52.307571  207923 start.go:293] postStartSetup for "newest-cni-400889" (driver="docker")
	I1013 22:10:52.307583  207923 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 22:10:52.307657  207923 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 22:10:52.307695  207923 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-400889
	I1013 22:10:52.324260  207923 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33091 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/newest-cni-400889/id_rsa Username:docker}
	I1013 22:10:52.427651  207923 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 22:10:52.431030  207923 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1013 22:10:52.431059  207923 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1013 22:10:52.431070  207923 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-2495/.minikube/addons for local assets ...
	I1013 22:10:52.431125  207923 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-2495/.minikube/files for local assets ...
	I1013 22:10:52.431210  207923 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem -> 42992.pem in /etc/ssl/certs
	I1013 22:10:52.431318  207923 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1013 22:10:52.439129  207923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem --> /etc/ssl/certs/42992.pem (1708 bytes)
	I1013 22:10:52.456960  207923 start.go:296] duration metric: took 149.373081ms for postStartSetup
	I1013 22:10:52.457054  207923 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 22:10:52.457109  207923 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-400889
	I1013 22:10:52.475070  207923 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33091 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/newest-cni-400889/id_rsa Username:docker}
	I1013 22:10:52.572733  207923 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1013 22:10:52.577441  207923 fix.go:56] duration metric: took 6.167799717s for fixHost
	I1013 22:10:52.577466  207923 start.go:83] releasing machines lock for "newest-cni-400889", held for 6.167850661s
	I1013 22:10:52.577539  207923 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-400889
	I1013 22:10:52.594674  207923 ssh_runner.go:195] Run: cat /version.json
	I1013 22:10:52.594703  207923 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 22:10:52.594729  207923 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-400889
	I1013 22:10:52.594760  207923 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-400889
	I1013 22:10:52.615708  207923 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33091 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/newest-cni-400889/id_rsa Username:docker}
	I1013 22:10:52.616079  207923 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33091 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/newest-cni-400889/id_rsa Username:docker}
	I1013 22:10:52.715338  207923 ssh_runner.go:195] Run: systemctl --version
	I1013 22:10:52.722178  207923 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1013 22:10:52.829759  207923 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 22:10:52.834454  207923 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 22:10:52.834524  207923 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 22:10:52.842950  207923 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1013 22:10:52.842974  207923 start.go:495] detecting cgroup driver to use...
	I1013 22:10:52.843006  207923 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1013 22:10:52.843061  207923 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1013 22:10:52.858462  207923 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1013 22:10:52.872048  207923 docker.go:218] disabling cri-docker service (if available) ...
	I1013 22:10:52.872164  207923 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 22:10:52.888280  207923 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 22:10:52.901611  207923 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 22:10:53.049664  207923 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 22:10:53.187507  207923 docker.go:234] disabling docker service ...
	I1013 22:10:53.187582  207923 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 22:10:53.206596  207923 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 22:10:53.223551  207923 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 22:10:53.360492  207923 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 22:10:53.505968  207923 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1013 22:10:53.520660  207923 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 22:10:53.541512  207923 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1013 22:10:53.541583  207923 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:10:53.554551  207923 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1013 22:10:53.554617  207923 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:10:53.566727  207923 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:10:53.579652  207923 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:10:53.590191  207923 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 22:10:53.603558  207923 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:10:53.612209  207923 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:10:53.620009  207923 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:10:53.627983  207923 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 22:10:53.635011  207923 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1013 22:10:53.642008  207923 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:10:53.777214  207923 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1013 22:10:53.945194  207923 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1013 22:10:53.945273  207923 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1013 22:10:53.950847  207923 start.go:563] Will wait 60s for crictl version
	I1013 22:10:53.950906  207923 ssh_runner.go:195] Run: which crictl
	I1013 22:10:53.958153  207923 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1013 22:10:54.008311  207923 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1013 22:10:54.008414  207923 ssh_runner.go:195] Run: crio --version
	I1013 22:10:54.045586  207923 ssh_runner.go:195] Run: crio --version
	I1013 22:10:54.094907  207923 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1013 22:10:54.097629  207923 cli_runner.go:164] Run: docker network inspect newest-cni-400889 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 22:10:54.119697  207923 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1013 22:10:54.123443  207923 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 22:10:54.135744  207923 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
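	This extra kubeadm option matches the ExtraOptions entry {Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16} recorded in the cluster config below, and it is the value that surfaces as podSubnet and clusterCIDR in the generated kubeadm and kube-proxy configs. On the command line it would presumably have been supplied with something like the following (assumed invocation; the test harness passes the real flags):
	    out/minikube-linux-arm64 start -p newest-cni-400889 --driver=docker --container-runtime=crio \
	      --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16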
	I1013 22:10:49.429346  208589 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-007533" ...
	I1013 22:10:49.429439  208589 cli_runner.go:164] Run: docker start default-k8s-diff-port-007533
	I1013 22:10:49.681999  208589 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-007533 --format={{.State.Status}}
	I1013 22:10:49.704933  208589 kic.go:430] container "default-k8s-diff-port-007533" state is running.
	I1013 22:10:49.705319  208589 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-007533
	I1013 22:10:49.724140  208589 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/default-k8s-diff-port-007533/config.json ...
	I1013 22:10:49.724369  208589 machine.go:93] provisionDockerMachine start ...
	I1013 22:10:49.724445  208589 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-007533
	I1013 22:10:49.745661  208589 main.go:141] libmachine: Using SSH client type: native
	I1013 22:10:49.745975  208589 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33096 <nil> <nil>}
	I1013 22:10:49.745994  208589 main.go:141] libmachine: About to run SSH command:
	hostname
	I1013 22:10:49.747465  208589 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1013 22:10:52.907381  208589 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-007533
	
	I1013 22:10:52.907414  208589 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-007533"
	I1013 22:10:52.907480  208589 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-007533
	I1013 22:10:52.929300  208589 main.go:141] libmachine: Using SSH client type: native
	I1013 22:10:52.929610  208589 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33096 <nil> <nil>}
	I1013 22:10:52.929627  208589 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-007533 && echo "default-k8s-diff-port-007533" | sudo tee /etc/hostname
	I1013 22:10:53.119423  208589 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-007533
	
	I1013 22:10:53.119579  208589 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-007533
	I1013 22:10:53.145508  208589 main.go:141] libmachine: Using SSH client type: native
	I1013 22:10:53.145823  208589 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33096 <nil> <nil>}
	I1013 22:10:53.145841  208589 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-007533' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-007533/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-007533' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 22:10:53.295958  208589 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1013 22:10:53.295979  208589 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-2495/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-2495/.minikube}
	I1013 22:10:53.295997  208589 ubuntu.go:190] setting up certificates
	I1013 22:10:53.296007  208589 provision.go:84] configureAuth start
	I1013 22:10:53.296077  208589 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-007533
	I1013 22:10:53.313586  208589 provision.go:143] copyHostCerts
	I1013 22:10:53.313646  208589 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-2495/.minikube/ca.pem, removing ...
	I1013 22:10:53.313662  208589 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-2495/.minikube/ca.pem
	I1013 22:10:53.313733  208589 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-2495/.minikube/ca.pem (1082 bytes)
	I1013 22:10:53.313831  208589 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-2495/.minikube/cert.pem, removing ...
	I1013 22:10:53.313836  208589 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-2495/.minikube/cert.pem
	I1013 22:10:53.313861  208589 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-2495/.minikube/cert.pem (1123 bytes)
	I1013 22:10:53.313924  208589 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-2495/.minikube/key.pem, removing ...
	I1013 22:10:53.313928  208589 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-2495/.minikube/key.pem
	I1013 22:10:53.313951  208589 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-2495/.minikube/key.pem (1675 bytes)
	I1013 22:10:53.314003  208589 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-2495/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-007533 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-007533 localhost minikube]
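	The server certificate generated here should carry exactly the SANs listed in the log line above. An illustrative check with openssl, reusing the path from the log:
	    openssl x509 -noout -text -in /home/jenkins/minikube-integration/21724-2495/.minikube/machines/server.pem | grep -A1 'Subject Alternative Name'
	    # expect 127.0.0.1, 192.168.76.2, default-k8s-diff-port-007533, localhost, minikube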
	I1013 22:10:54.039583  208589 provision.go:177] copyRemoteCerts
	I1013 22:10:54.039697  208589 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 22:10:54.039787  208589 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-007533
	I1013 22:10:54.061098  208589 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33096 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/default-k8s-diff-port-007533/id_rsa Username:docker}
	I1013 22:10:54.167740  208589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1013 22:10:54.187005  208589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1013 22:10:54.138411  207923 kubeadm.go:883] updating cluster {Name:newest-cni-400889 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-400889 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 22:10:54.138544  207923 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:10:54.138623  207923 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 22:10:54.179026  207923 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 22:10:54.179046  207923 crio.go:433] Images already preloaded, skipping extraction
	I1013 22:10:54.179103  207923 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 22:10:54.214312  207923 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 22:10:54.214389  207923 cache_images.go:85] Images are preloaded, skipping loading
	I1013 22:10:54.214412  207923 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1013 22:10:54.214549  207923 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-400889 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-400889 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
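	This rendered unit override is written a few lines below to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (with the unit itself going to /lib/systemd/system/kubelet.service). An illustrative way to confirm the effective command line once systemd has been reloaded:
	    sudo systemctl cat kubelet   # should show the ExecStart override above, including --node-ip=192.168.85.2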
	I1013 22:10:54.214638  207923 ssh_runner.go:195] Run: crio config
	I1013 22:10:54.314857  207923 cni.go:84] Creating CNI manager for ""
	I1013 22:10:54.314880  207923 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 22:10:54.314903  207923 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1013 22:10:54.314931  207923 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-400889 NodeName:newest-cni-400889 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 22:10:54.315053  207923 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-400889"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
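	The rendered kubeadm config is copied a few lines below to /var/tmp/minikube/kubeadm.yaml.new (2212 bytes). An illustrative sanity check of the fields specific to this profile:
	    grep -E 'bindPort|controlPlaneEndpoint|podSubnet|clusterCIDR' /var/tmp/minikube/kubeadm.yaml.new
	    # expect 8443, control-plane.minikube.internal:8443, and 10.42.0.0/16 for both CIDR fields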
	
	I1013 22:10:54.315119  207923 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1013 22:10:54.327079  207923 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 22:10:54.327152  207923 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 22:10:54.335281  207923 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1013 22:10:54.349772  207923 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 22:10:54.366006  207923 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1013 22:10:54.382687  207923 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1013 22:10:54.386051  207923 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 22:10:54.395105  207923 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:10:54.535334  207923 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 22:10:54.554994  207923 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/newest-cni-400889 for IP: 192.168.85.2
	I1013 22:10:54.555017  207923 certs.go:195] generating shared ca certs ...
	I1013 22:10:54.555035  207923 certs.go:227] acquiring lock for ca certs: {Name:mk2386d3847709c1fe7ff4ab092e2e3fd8551167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:10:54.555173  207923 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-2495/.minikube/ca.key
	I1013 22:10:54.555235  207923 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.key
	I1013 22:10:54.555245  207923 certs.go:257] generating profile certs ...
	I1013 22:10:54.555327  207923 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/newest-cni-400889/client.key
	I1013 22:10:54.555393  207923 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/newest-cni-400889/apiserver.key.58b80bf4
	I1013 22:10:54.555434  207923 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/newest-cni-400889/proxy-client.key
	I1013 22:10:54.555552  207923 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/4299.pem (1338 bytes)
	W1013 22:10:54.555587  207923 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-2495/.minikube/certs/4299_empty.pem, impossibly tiny 0 bytes
	I1013 22:10:54.555599  207923 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca-key.pem (1679 bytes)
	I1013 22:10:54.555624  207923 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem (1082 bytes)
	I1013 22:10:54.555651  207923 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem (1123 bytes)
	I1013 22:10:54.555683  207923 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/key.pem (1675 bytes)
	I1013 22:10:54.555730  207923 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem (1708 bytes)
	I1013 22:10:54.556403  207923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 22:10:54.597734  207923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1013 22:10:54.664803  207923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 22:10:54.697571  207923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1013 22:10:54.719822  207923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/newest-cni-400889/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1013 22:10:54.750988  207923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/newest-cni-400889/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1013 22:10:54.776949  207923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/newest-cni-400889/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 22:10:54.810269  207923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/newest-cni-400889/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1013 22:10:54.855660  207923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/certs/4299.pem --> /usr/share/ca-certificates/4299.pem (1338 bytes)
	I1013 22:10:54.878583  207923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem --> /usr/share/ca-certificates/42992.pem (1708 bytes)
	I1013 22:10:54.904223  207923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 22:10:54.926069  207923 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 22:10:54.942267  207923 ssh_runner.go:195] Run: openssl version
	I1013 22:10:54.949276  207923 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4299.pem && ln -fs /usr/share/ca-certificates/4299.pem /etc/ssl/certs/4299.pem"
	I1013 22:10:54.957562  207923 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4299.pem
	I1013 22:10:54.961480  207923 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 13 21:06 /usr/share/ca-certificates/4299.pem
	I1013 22:10:54.961541  207923 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4299.pem
	I1013 22:10:55.010279  207923 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4299.pem /etc/ssl/certs/51391683.0"
	I1013 22:10:55.021300  207923 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/42992.pem && ln -fs /usr/share/ca-certificates/42992.pem /etc/ssl/certs/42992.pem"
	I1013 22:10:55.032220  207923 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42992.pem
	I1013 22:10:55.038656  207923 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 13 21:06 /usr/share/ca-certificates/42992.pem
	I1013 22:10:55.038732  207923 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42992.pem
	I1013 22:10:55.096259  207923 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/42992.pem /etc/ssl/certs/3ec20f2e.0"
	I1013 22:10:55.110031  207923 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 22:10:55.140295  207923 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:10:55.157135  207923 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 20:59 /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:10:55.157219  207923 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:10:55.216659  207923 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
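	The 8-hex-digit link names used in these commands (51391683.0, 3ec20f2e.0, b5213941.0) are the subject hashes openssl computes for each certificate; the symlink-by-hash layout is how the system trust store resolves CAs. Illustrative check against the paths from the log:
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941, matching /etc/ssl/certs/b5213941.0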
	I1013 22:10:55.226292  207923 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 22:10:55.231893  207923 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1013 22:10:55.304205  207923 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1013 22:10:55.389692  207923 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1013 22:10:55.487661  207923 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1013 22:10:55.556647  207923 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1013 22:10:55.753356  207923 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1013 22:10:55.974260  207923 kubeadm.go:400] StartCluster: {Name:newest-cni-400889 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-400889 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:10:55.974346  207923 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 22:10:55.974416  207923 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 22:10:56.045465  207923 cri.go:89] found id: "29059c40b00ad04ad44738293f0b2017c88e1b61ccaeaff02d2db844814fa5f1"
	I1013 22:10:56.045494  207923 cri.go:89] found id: "cbd66f7e4aa28f071adbbc9c82004ce2e6b5d7758657b18b8705a841c024a4f4"
	I1013 22:10:56.045499  207923 cri.go:89] found id: "d7f909f4526bb28a23b8c602214df3866cb2dc9cc28c820891839a829fc81270"
	I1013 22:10:56.045503  207923 cri.go:89] found id: "41f03a8f2cf4c63c92345cd9504c3c2a0150d9555a97a54de9a811e88e7eb0f6"
	I1013 22:10:56.045507  207923 cri.go:89] found id: ""
	I1013 22:10:56.045565  207923 ssh_runner.go:195] Run: sudo runc list -f json
	W1013 22:10:56.082973  207923 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:10:56Z" level=error msg="open /run/runc: no such file or directory"
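	The warning is tolerated in this run: the containers were already enumerated through the CRI (the four "found id" lines above), and the direct runc list probe fails only because /run/runc is absent on this image, for example when cri-o is configured with a different OCI runtime or state root (an assumption, not stated in the log). minikube records the failure and continues with the restart path. An illustrative fallback probe under that assumption:
	    sudo ls /run/runc 2>/dev/null || sudo crictl ps -a --quiet | head   # use the CRI listing when the runc state dir is missing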
	I1013 22:10:56.083086  207923 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1013 22:10:56.101552  207923 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1013 22:10:56.101581  207923 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1013 22:10:56.101644  207923 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1013 22:10:56.118379  207923 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1013 22:10:56.118932  207923 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-400889" does not appear in /home/jenkins/minikube-integration/21724-2495/kubeconfig
	I1013 22:10:56.119066  207923 kubeconfig.go:62] /home/jenkins/minikube-integration/21724-2495/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-400889" cluster setting kubeconfig missing "newest-cni-400889" context setting]
	I1013 22:10:56.119407  207923 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/kubeconfig: {Name:mke211bdcf461494c5205a242d94f4d44bbce10e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:10:56.121206  207923 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1013 22:10:56.134016  207923 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1013 22:10:56.134048  207923 kubeadm.go:601] duration metric: took 32.460427ms to restartPrimaryControlPlane
	I1013 22:10:56.134069  207923 kubeadm.go:402] duration metric: took 159.818435ms to StartCluster
	I1013 22:10:56.134088  207923 settings.go:142] acquiring lock: {Name:mk4a4b065845724eb9b4bb1832a39a02e57dd066 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:10:56.134173  207923 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-2495/kubeconfig
	I1013 22:10:56.135009  207923 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/kubeconfig: {Name:mke211bdcf461494c5205a242d94f4d44bbce10e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:10:56.135318  207923 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 22:10:56.135931  207923 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1013 22:10:56.136051  207923 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-400889"
	I1013 22:10:56.136065  207923 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-400889"
	W1013 22:10:56.136071  207923 addons.go:247] addon storage-provisioner should already be in state true
	I1013 22:10:56.136095  207923 host.go:66] Checking if "newest-cni-400889" exists ...
	I1013 22:10:56.136684  207923 addons.go:69] Setting dashboard=true in profile "newest-cni-400889"
	I1013 22:10:56.136715  207923 addons.go:238] Setting addon dashboard=true in "newest-cni-400889"
	W1013 22:10:56.136726  207923 addons.go:247] addon dashboard should already be in state true
	I1013 22:10:56.136764  207923 host.go:66] Checking if "newest-cni-400889" exists ...
	I1013 22:10:56.136816  207923 config.go:182] Loaded profile config "newest-cni-400889": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:10:56.137227  207923 cli_runner.go:164] Run: docker container inspect newest-cni-400889 --format={{.State.Status}}
	I1013 22:10:56.137294  207923 cli_runner.go:164] Run: docker container inspect newest-cni-400889 --format={{.State.Status}}
	I1013 22:10:56.138894  207923 addons.go:69] Setting default-storageclass=true in profile "newest-cni-400889"
	I1013 22:10:56.138916  207923 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-400889"
	I1013 22:10:56.139476  207923 cli_runner.go:164] Run: docker container inspect newest-cni-400889 --format={{.State.Status}}
	I1013 22:10:56.147843  207923 out.go:179] * Verifying Kubernetes components...
	I1013 22:10:56.151581  207923 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:10:56.180697  207923 addons.go:238] Setting addon default-storageclass=true in "newest-cni-400889"
	W1013 22:10:56.180721  207923 addons.go:247] addon default-storageclass should already be in state true
	I1013 22:10:56.180744  207923 host.go:66] Checking if "newest-cni-400889" exists ...
	I1013 22:10:56.181167  207923 cli_runner.go:164] Run: docker container inspect newest-cni-400889 --format={{.State.Status}}
	I1013 22:10:56.216080  207923 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1013 22:10:56.219169  207923 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1013 22:10:56.219348  207923 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1013 22:10:54.210901  208589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1013 22:10:54.231140  208589 provision.go:87] duration metric: took 935.11087ms to configureAuth
	I1013 22:10:54.231163  208589 ubuntu.go:206] setting minikube options for container-runtime
	I1013 22:10:54.231346  208589 config.go:182] Loaded profile config "default-k8s-diff-port-007533": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:10:54.231438  208589 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-007533
	I1013 22:10:54.249260  208589 main.go:141] libmachine: Using SSH client type: native
	I1013 22:10:54.249566  208589 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33096 <nil> <nil>}
	I1013 22:10:54.249581  208589 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1013 22:10:54.637039  208589 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
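	The SSH command above writes /etc/sysconfig/crio.minikube with CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 ' and then restarts cri-o; the variable is presumably picked up by the crio systemd unit so registries inside the service CIDR can be pulled from without TLS (an assumption about the unit wiring, not shown in the log). Illustrative check of the file that was just written:
	    cat /etc/sysconfig/crio.minikube   # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '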
	
	I1013 22:10:54.637100  208589 machine.go:96] duration metric: took 4.91271327s to provisionDockerMachine
	I1013 22:10:54.637142  208589 start.go:293] postStartSetup for "default-k8s-diff-port-007533" (driver="docker")
	I1013 22:10:54.637179  208589 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 22:10:54.637268  208589 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 22:10:54.637331  208589 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-007533
	I1013 22:10:54.663897  208589 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33096 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/default-k8s-diff-port-007533/id_rsa Username:docker}
	I1013 22:10:54.794380  208589 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 22:10:54.798094  208589 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1013 22:10:54.798119  208589 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1013 22:10:54.798130  208589 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-2495/.minikube/addons for local assets ...
	I1013 22:10:54.798182  208589 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-2495/.minikube/files for local assets ...
	I1013 22:10:54.798267  208589 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem -> 42992.pem in /etc/ssl/certs
	I1013 22:10:54.798368  208589 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1013 22:10:54.808936  208589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem --> /etc/ssl/certs/42992.pem (1708 bytes)
	I1013 22:10:54.836214  208589 start.go:296] duration metric: took 199.045534ms for postStartSetup
	I1013 22:10:54.836333  208589 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 22:10:54.836421  208589 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-007533
	I1013 22:10:54.857310  208589 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33096 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/default-k8s-diff-port-007533/id_rsa Username:docker}
	I1013 22:10:54.969272  208589 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1013 22:10:54.974468  208589 fix.go:56] duration metric: took 5.564804544s for fixHost
	I1013 22:10:54.974497  208589 start.go:83] releasing machines lock for "default-k8s-diff-port-007533", held for 5.5648606s
	I1013 22:10:54.974577  208589 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-007533
	I1013 22:10:54.997424  208589 ssh_runner.go:195] Run: cat /version.json
	I1013 22:10:54.997479  208589 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-007533
	I1013 22:10:54.997712  208589 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 22:10:54.997771  208589 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-007533
	I1013 22:10:55.045183  208589 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33096 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/default-k8s-diff-port-007533/id_rsa Username:docker}
	I1013 22:10:55.059996  208589 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33096 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/default-k8s-diff-port-007533/id_rsa Username:docker}
	I1013 22:10:55.176566  208589 ssh_runner.go:195] Run: systemctl --version
	I1013 22:10:55.292221  208589 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1013 22:10:55.343619  208589 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 22:10:55.349850  208589 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 22:10:55.349931  208589 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 22:10:55.358116  208589 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1013 22:10:55.358139  208589 start.go:495] detecting cgroup driver to use...
	I1013 22:10:55.358171  208589 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1013 22:10:55.358238  208589 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1013 22:10:55.374322  208589 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1013 22:10:55.392310  208589 docker.go:218] disabling cri-docker service (if available) ...
	I1013 22:10:55.392413  208589 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 22:10:55.420448  208589 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 22:10:55.439988  208589 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 22:10:55.611005  208589 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 22:10:55.874196  208589 docker.go:234] disabling docker service ...
	I1013 22:10:55.874346  208589 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 22:10:55.907645  208589 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 22:10:55.930037  208589 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 22:10:56.265364  208589 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 22:10:56.568554  208589 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1013 22:10:56.591948  208589 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 22:10:56.613372  208589 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1013 22:10:56.613433  208589 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:10:56.639001  208589 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1013 22:10:56.639072  208589 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:10:56.664947  208589 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:10:56.684552  208589 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:10:56.700063  208589 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 22:10:56.715563  208589 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:10:56.727734  208589 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:10:56.746489  208589 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:10:56.757597  208589 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 22:10:56.773146  208589 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1013 22:10:56.786877  208589 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:10:57.005391  208589 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1013 22:10:57.199574  208589 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1013 22:10:57.199719  208589 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1013 22:10:57.213721  208589 start.go:563] Will wait 60s for crictl version
	I1013 22:10:57.213861  208589 ssh_runner.go:195] Run: which crictl
	I1013 22:10:57.218507  208589 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1013 22:10:57.270598  208589 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1013 22:10:57.270688  208589 ssh_runner.go:195] Run: crio --version
	I1013 22:10:57.326670  208589 ssh_runner.go:195] Run: crio --version
	I1013 22:10:57.403454  208589 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1013 22:10:57.406297  208589 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-007533 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 22:10:57.433691  208589 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1013 22:10:57.439894  208589 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 22:10:57.451035  208589 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-007533 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-007533 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 22:10:57.451159  208589 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:10:57.451221  208589 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 22:10:57.518190  208589 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 22:10:57.518274  208589 crio.go:433] Images already preloaded, skipping extraction
	I1013 22:10:57.518371  208589 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 22:10:57.586742  208589 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 22:10:57.586764  208589 cache_images.go:85] Images are preloaded, skipping loading
	I1013 22:10:57.586772  208589 kubeadm.go:934] updating node { 192.168.76.2 8444 v1.34.1 crio true true} ...
	I1013 22:10:57.586875  208589 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-007533 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-007533 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1013 22:10:57.586962  208589 ssh_runner.go:195] Run: crio config
	I1013 22:10:57.676611  208589 cni.go:84] Creating CNI manager for ""
	I1013 22:10:57.676635  208589 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 22:10:57.676654  208589 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1013 22:10:57.676696  208589 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-007533 NodeName:default-k8s-diff-port-007533 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 22:10:57.676853  208589 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-007533"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
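	Compared with the newest-cni-400889 config above, this profile differs mainly in the API endpoint (bindPort 8444 and controlPlaneEndpoint control-plane.minikube.internal:8444) and in the pod CIDR (10.244.0.0/16, the default, since no kubeadm ExtraOptions are set here). An illustrative check against the file written a few lines below:
	    grep -E 'bindPort|controlPlaneEndpoint|podSubnet|clusterCIDR' /var/tmp/minikube/kubeadm.yaml.new
	    # expect 8444 and 10.244.0.0/16 here, versus 8443 and 10.42.0.0/16 for newest-cni-400889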
	
	I1013 22:10:57.676957  208589 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1013 22:10:57.689549  208589 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 22:10:57.689648  208589 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 22:10:57.699818  208589 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1013 22:10:57.714868  208589 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 22:10:57.741666  208589 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1013 22:10:57.765866  208589 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1013 22:10:57.770037  208589 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 22:10:57.789122  208589 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:10:58.027078  208589 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 22:10:58.047739  208589 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/default-k8s-diff-port-007533 for IP: 192.168.76.2
	I1013 22:10:58.047762  208589 certs.go:195] generating shared ca certs ...
	I1013 22:10:58.047804  208589 certs.go:227] acquiring lock for ca certs: {Name:mk2386d3847709c1fe7ff4ab092e2e3fd8551167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:10:58.047968  208589 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-2495/.minikube/ca.key
	I1013 22:10:58.048033  208589 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.key
	I1013 22:10:58.048054  208589 certs.go:257] generating profile certs ...
	I1013 22:10:58.048169  208589 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/default-k8s-diff-port-007533/client.key
	I1013 22:10:58.048257  208589 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/default-k8s-diff-port-007533/apiserver.key.e8d90e38
	I1013 22:10:58.048326  208589 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/default-k8s-diff-port-007533/proxy-client.key
	I1013 22:10:58.048475  208589 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/4299.pem (1338 bytes)
	W1013 22:10:58.048531  208589 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-2495/.minikube/certs/4299_empty.pem, impossibly tiny 0 bytes
	I1013 22:10:58.048547  208589 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca-key.pem (1679 bytes)
	I1013 22:10:58.048573  208589 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem (1082 bytes)
	I1013 22:10:58.048634  208589 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem (1123 bytes)
	I1013 22:10:58.048663  208589 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/key.pem (1675 bytes)
	I1013 22:10:58.048729  208589 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem (1708 bytes)
	I1013 22:10:58.049332  208589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 22:10:58.110912  208589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1013 22:10:58.162538  208589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 22:10:58.197602  208589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1013 22:10:58.245425  208589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/default-k8s-diff-port-007533/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1013 22:10:58.284987  208589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/default-k8s-diff-port-007533/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1013 22:10:58.338793  208589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/default-k8s-diff-port-007533/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 22:10:58.375748  208589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/default-k8s-diff-port-007533/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1013 22:10:58.395998  208589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 22:10:58.420941  208589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/certs/4299.pem --> /usr/share/ca-certificates/4299.pem (1338 bytes)
	I1013 22:10:58.453039  208589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem --> /usr/share/ca-certificates/42992.pem (1708 bytes)
	I1013 22:10:58.487334  208589 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 22:10:58.505584  208589 ssh_runner.go:195] Run: openssl version
	I1013 22:10:58.516517  208589 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4299.pem && ln -fs /usr/share/ca-certificates/4299.pem /etc/ssl/certs/4299.pem"
	I1013 22:10:58.525613  208589 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4299.pem
	I1013 22:10:58.532102  208589 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 13 21:06 /usr/share/ca-certificates/4299.pem
	I1013 22:10:58.532208  208589 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4299.pem
	I1013 22:10:58.606241  208589 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4299.pem /etc/ssl/certs/51391683.0"
	I1013 22:10:58.615375  208589 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/42992.pem && ln -fs /usr/share/ca-certificates/42992.pem /etc/ssl/certs/42992.pem"
	I1013 22:10:58.633175  208589 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42992.pem
	I1013 22:10:58.637865  208589 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 13 21:06 /usr/share/ca-certificates/42992.pem
	I1013 22:10:58.637956  208589 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42992.pem
	I1013 22:10:58.696712  208589 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/42992.pem /etc/ssl/certs/3ec20f2e.0"
	I1013 22:10:58.705350  208589 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 22:10:58.714979  208589 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:10:58.719907  208589 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 20:59 /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:10:58.720008  208589 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:10:58.812495  208589 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
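
The openssl x509 -hash calls above compute the subject hash that OpenSSL uses to look up CA certificates, and each test -L || ln -fs pair publishes the PEM under /etc/ssl/certs/<hash>.0. A small sketch of that step, shelling out to openssl the same way; the paths are illustrative and this is not minikube's code (the real commands run over SSH with sudo):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCACert computes the OpenSSL subject hash of a PEM certificate and, if
	// no <certsDir>/<hash>.0 link exists yet, creates one pointing at the PEM.
	func linkCACert(pemPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return err
		}
		link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
		if _, err := os.Lstat(link); err == nil {
			return nil // link already present, matching the `test -L || ln -fs` guard
		}
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}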
	I1013 22:10:58.837043  208589 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 22:10:58.853654  208589 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1013 22:10:58.945608  208589 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1013 22:10:59.030267  208589 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1013 22:10:59.245322  208589 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1013 22:10:59.375654  208589 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1013 22:10:59.509656  208589 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
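
The -checkend 86400 runs above ask whether each control-plane certificate expires within the next 24 hours. The same check can be done with the standard library's crypto/x509; the path below is just one of the certs from the log:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM-encoded certificate at path expires
	// within d, the question `openssl x509 -checkend 86400` answers for 24h.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("expires within 24h:", soon)
	}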
	I1013 22:10:59.619729  208589 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-007533 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-007533 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:10:59.619910  208589 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 22:10:59.620017  208589 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 22:10:59.759246  208589 cri.go:89] found id: "3970a5fddb4ed9fafd03a56430ff0a855693ce410e375d5e6f5b23115bdec4fe"
	I1013 22:10:59.759328  208589 cri.go:89] found id: "5bbc4021a2610d0a72615ef54a61b83477debc9e67e27338b6ebdad10f29a7bb"
	I1013 22:10:59.759348  208589 cri.go:89] found id: "bd56c0184294021b9044112e6391397dce68b76fd94d9c861cdd5ada9d399899"
	I1013 22:10:59.759364  208589 cri.go:89] found id: "99b9c491479a5e957e19a4c1ca9d1a62f9cde3467897c3b831fc01afd815b1f7"
	I1013 22:10:59.759381  208589 cri.go:89] found id: ""
	I1013 22:10:59.759474  208589 ssh_runner.go:195] Run: sudo runc list -f json
	W1013 22:10:59.788952  208589 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:10:59Z" level=error msg="open /run/runc: no such file or directory"
	I1013 22:10:59.789075  208589 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1013 22:10:59.819475  208589 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1013 22:10:59.819557  208589 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1013 22:10:59.819638  208589 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1013 22:10:59.844038  208589 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1013 22:10:59.844785  208589 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-007533" does not appear in /home/jenkins/minikube-integration/21724-2495/kubeconfig
	I1013 22:10:59.845176  208589 kubeconfig.go:62] /home/jenkins/minikube-integration/21724-2495/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-007533" cluster setting kubeconfig missing "default-k8s-diff-port-007533" context setting]
	I1013 22:10:59.845982  208589 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/kubeconfig: {Name:mke211bdcf461494c5205a242d94f4d44bbce10e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
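
The three lines above show the restart path noticing that the default-k8s-diff-port-007533 cluster and context are missing from the shared kubeconfig and repairing it under a file lock. A hedged sketch of the same presence check using client-go's clientcmd package (an external module, k8s.io/client-go, not minikube's own kubeconfig code):

	package main

	import (
		"fmt"
		"os"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// KUBECONFIG is assumed to point at the file being verified,
		// e.g. .../21724-2495/kubeconfig in this run.
		cfg, err := clientcmd.LoadFromFile(os.Getenv("KUBECONFIG"))
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		name := "default-k8s-diff-port-007533"
		_, hasCluster := cfg.Clusters[name]
		_, hasContext := cfg.Contexts[name]
		if !hasCluster || !hasContext {
			fmt.Printf("kubeconfig needs updating: cluster=%v context=%v\n", hasCluster, hasContext)
		}
	}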
	I1013 22:10:59.848157  208589 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1013 22:10:59.864277  208589 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1013 22:10:59.864373  208589 kubeadm.go:601] duration metric: took 44.796703ms to restartPrimaryControlPlane
	I1013 22:10:59.864398  208589 kubeadm.go:402] duration metric: took 244.685898ms to StartCluster
	I1013 22:10:59.864452  208589 settings.go:142] acquiring lock: {Name:mk4a4b065845724eb9b4bb1832a39a02e57dd066 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:10:59.864569  208589 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-2495/kubeconfig
	I1013 22:10:59.865707  208589 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/kubeconfig: {Name:mke211bdcf461494c5205a242d94f4d44bbce10e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:10:59.866031  208589 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 22:10:59.866537  208589 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1013 22:10:59.866620  208589 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-007533"
	I1013 22:10:59.866639  208589 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-007533"
	W1013 22:10:59.866646  208589 addons.go:247] addon storage-provisioner should already be in state true
	I1013 22:10:59.866671  208589 host.go:66] Checking if "default-k8s-diff-port-007533" exists ...
	I1013 22:10:59.867309  208589 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-007533 --format={{.State.Status}}
	I1013 22:10:59.867693  208589 config.go:182] Loaded profile config "default-k8s-diff-port-007533": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:10:59.867816  208589 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-007533"
	I1013 22:10:59.867853  208589 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-007533"
	W1013 22:10:59.867881  208589 addons.go:247] addon dashboard should already be in state true
	I1013 22:10:59.867927  208589 host.go:66] Checking if "default-k8s-diff-port-007533" exists ...
	I1013 22:10:59.868452  208589 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-007533 --format={{.State.Status}}
	I1013 22:10:59.872934  208589 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-007533"
	I1013 22:10:59.873199  208589 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-007533"
	I1013 22:10:59.873149  208589 out.go:179] * Verifying Kubernetes components...
	I1013 22:10:59.880222  208589 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-007533 --format={{.State.Status}}
	I1013 22:10:59.882022  208589 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:10:59.925446  208589 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1013 22:10:59.928468  208589 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 22:10:59.928495  208589 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1013 22:10:59.928567  208589 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-007533
	I1013 22:10:59.931864  208589 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1013 22:10:59.939877  208589 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1013 22:10:56.222159  207923 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1013 22:10:56.222188  207923 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1013 22:10:56.222292  207923 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-400889
	I1013 22:10:56.227993  207923 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 22:10:56.228029  207923 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1013 22:10:56.228114  207923 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-400889
	I1013 22:10:56.265831  207923 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1013 22:10:56.265860  207923 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1013 22:10:56.265911  207923 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-400889
	I1013 22:10:56.289367  207923 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33091 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/newest-cni-400889/id_rsa Username:docker}
	I1013 22:10:56.300468  207923 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33091 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/newest-cni-400889/id_rsa Username:docker}
	I1013 22:10:56.329960  207923 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33091 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/newest-cni-400889/id_rsa Username:docker}
	I1013 22:10:56.615938  207923 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 22:10:56.657049  207923 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 22:10:56.764657  207923 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1013 22:10:56.764745  207923 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1013 22:10:56.794684  207923 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1013 22:10:56.881178  207923 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1013 22:10:56.881256  207923 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1013 22:10:56.980381  207923 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1013 22:10:56.980474  207923 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1013 22:10:57.061027  207923 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1013 22:10:57.061089  207923 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1013 22:10:57.142140  207923 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1013 22:10:57.142227  207923 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1013 22:10:57.188234  207923 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1013 22:10:57.188320  207923 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1013 22:10:57.215282  207923 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1013 22:10:57.215359  207923 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1013 22:10:57.245227  207923 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1013 22:10:57.245327  207923 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1013 22:10:57.263250  207923 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1013 22:10:57.263333  207923 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1013 22:10:57.304471  207923 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1013 22:10:59.941301  208589 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-007533"
	W1013 22:10:59.941321  208589 addons.go:247] addon default-storageclass should already be in state true
	I1013 22:10:59.941357  208589 host.go:66] Checking if "default-k8s-diff-port-007533" exists ...
	I1013 22:10:59.941808  208589 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-007533 --format={{.State.Status}}
	I1013 22:10:59.949621  208589 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1013 22:10:59.949656  208589 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1013 22:10:59.949720  208589 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-007533
	I1013 22:10:59.976042  208589 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33096 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/default-k8s-diff-port-007533/id_rsa Username:docker}
	I1013 22:10:59.990831  208589 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33096 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/default-k8s-diff-port-007533/id_rsa Username:docker}
	I1013 22:11:00.000668  208589 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1013 22:11:00.000689  208589 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1013 22:11:00.000771  208589 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-007533
	I1013 22:11:00.094535  208589 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33096 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/default-k8s-diff-port-007533/id_rsa Username:docker}
	I1013 22:11:00.663807  208589 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 22:11:00.693522  208589 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1013 22:11:00.756986  208589 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 22:11:00.788498  208589 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1013 22:11:00.788571  208589 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1013 22:11:00.809014  208589 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-007533" to be "Ready" ...
	I1013 22:11:00.900877  208589 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1013 22:11:00.900947  208589 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1013 22:11:00.964752  208589 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1013 22:11:00.964821  208589 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1013 22:11:01.004434  208589 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1013 22:11:01.004469  208589 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1013 22:11:01.064598  208589 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1013 22:11:01.064621  208589 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1013 22:11:01.144314  208589 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1013 22:11:01.144341  208589 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1013 22:11:01.229062  208589 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1013 22:11:01.229090  208589 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1013 22:11:01.324295  208589 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1013 22:11:01.324322  208589 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1013 22:11:01.385269  208589 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1013 22:11:01.385294  208589 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1013 22:11:01.422781  208589 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1013 22:11:06.912137  207923 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.296091554s)
	I1013 22:11:06.912195  207923 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (10.255071903s)
	I1013 22:11:06.912229  207923 api_server.go:52] waiting for apiserver process to appear ...
	I1013 22:11:06.912288  207923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 22:11:06.912360  207923 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (10.117600723s)
	I1013 22:11:07.024930  207923 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (9.720359653s)
	I1013 22:11:07.025175  207923 api_server.go:72] duration metric: took 10.889805157s to wait for apiserver process to appear ...
	I1013 22:11:07.025225  207923 api_server.go:88] waiting for apiserver healthz status ...
	I1013 22:11:07.025262  207923 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1013 22:11:07.028829  207923 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-400889 addons enable metrics-server
	
	I1013 22:11:07.031720  207923 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1013 22:11:07.034637  207923 addons.go:514] duration metric: took 10.898698839s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1013 22:11:07.055246  207923 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1013 22:11:07.055284  207923 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1013 22:11:07.525720  207923 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1013 22:11:07.546066  207923 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1013 22:11:07.547446  207923 api_server.go:141] control plane version: v1.34.1
	I1013 22:11:07.547480  207923 api_server.go:131] duration metric: took 522.23014ms to wait for apiserver health ...
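
The 500 responses above come from the aggregated /healthz endpoint while the rbac/bootstrap-roles post-start hook is still completing; the next poll returns 200 and the wait ends after roughly 522ms. A minimal polling loop in the same spirit (a sketch, not minikube's api_server.go):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitHealthz polls url until it returns 200 or the deadline passes.
	// InsecureSkipVerify is used only because this sketch has no CA bundle;
	// minikube itself verifies against the cluster CA.
	func waitHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s", url)
	}

	func main() {
		if err := waitHealthz("https://192.168.85.2:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}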
	I1013 22:11:07.547489  207923 system_pods.go:43] waiting for kube-system pods to appear ...
	I1013 22:11:07.562338  207923 system_pods.go:59] 8 kube-system pods found
	I1013 22:11:07.562378  207923 system_pods.go:61] "coredns-66bc5c9577-cc4wf" [0bf2694d-f251-4b5b-86fc-6dfc45fe88c4] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1013 22:11:07.562409  207923 system_pods.go:61] "etcd-newest-cni-400889" [67dc0b91-0ac5-4923-a944-5f2dd99ad833] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1013 22:11:07.562422  207923 system_pods.go:61] "kindnet-k8zlc" [bce90592-0127-4946-bc83-a6b06490dcc1] Running
	I1013 22:11:07.562450  207923 system_pods.go:61] "kube-apiserver-newest-cni-400889" [bd2c7b07-69bf-43b7-ba7a-1002daf22666] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1013 22:11:07.562461  207923 system_pods.go:61] "kube-controller-manager-newest-cni-400889" [0f7464a5-ac8f-49fb-92cb-42bedd0068ce] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1013 22:11:07.562467  207923 system_pods.go:61] "kube-proxy-2c8dd" [e0608056-bfa9-46cf-a6c4-da63c05dc51a] Running
	I1013 22:11:07.562502  207923 system_pods.go:61] "kube-scheduler-newest-cni-400889" [8d46c2a1-3b0a-4b30-8143-d2fa1d20f276] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1013 22:11:07.562515  207923 system_pods.go:61] "storage-provisioner" [d60a2a57-2585-4721-aab0-cd73fa7bf7f0] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1013 22:11:07.562522  207923 system_pods.go:74] duration metric: took 15.011146ms to wait for pod list to return data ...
	I1013 22:11:07.562534  207923 default_sa.go:34] waiting for default service account to be created ...
	I1013 22:11:07.570171  207923 default_sa.go:45] found service account: "default"
	I1013 22:11:07.570198  207923 default_sa.go:55] duration metric: took 7.657742ms for default service account to be created ...
	I1013 22:11:07.570212  207923 kubeadm.go:586] duration metric: took 11.434842265s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1013 22:11:07.570252  207923 node_conditions.go:102] verifying NodePressure condition ...
	I1013 22:11:07.574141  207923 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1013 22:11:07.574175  207923 node_conditions.go:123] node cpu capacity is 2
	I1013 22:11:07.574187  207923 node_conditions.go:105] duration metric: took 3.930001ms to run NodePressure ...
	I1013 22:11:07.574199  207923 start.go:241] waiting for startup goroutines ...
	I1013 22:11:07.574239  207923 start.go:246] waiting for cluster config update ...
	I1013 22:11:07.574251  207923 start.go:255] writing updated cluster config ...
	I1013 22:11:07.574554  207923 ssh_runner.go:195] Run: rm -f paused
	I1013 22:11:07.686699  207923 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1013 22:11:07.691738  207923 out.go:179] * Done! kubectl is now configured to use "newest-cni-400889" cluster and "default" namespace by default
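
The closing message compares the client kubectl minor version (1.33) against the cluster version (1.34) and reports a minor-version skew of 1. A tiny sketch of that comparison, using the version strings from the log:

	package main

	import (
		"fmt"
		"strconv"
		"strings"
	)

	// minorSkew returns the absolute difference between the minor components of
	// two "major.minor.patch" version strings (a leading "v" is allowed).
	func minorSkew(a, b string) (int, error) {
		minor := func(v string) (int, error) {
			parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
			if len(parts) < 2 {
				return 0, fmt.Errorf("unexpected version %q", v)
			}
			return strconv.Atoi(parts[1])
		}
		am, err := minor(a)
		if err != nil {
			return 0, err
		}
		bm, err := minor(b)
		if err != nil {
			return 0, err
		}
		if am > bm {
			return am - bm, nil
		}
		return bm - am, nil
	}

	func main() {
		skew, _ := minorSkew("1.33.2", "v1.34.1") // values from the log above
		fmt.Println("minor skew:", skew)          // prints 1
	}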
	I1013 22:11:07.424856  208589 node_ready.go:49] node "default-k8s-diff-port-007533" is "Ready"
	I1013 22:11:07.424884  208589 node_ready.go:38] duration metric: took 6.615803519s for node "default-k8s-diff-port-007533" to be "Ready" ...
	I1013 22:11:07.424897  208589 api_server.go:52] waiting for apiserver process to appear ...
	I1013 22:11:07.424952  208589 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 22:11:08.442320  208589 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.748766949s)
	I1013 22:11:11.093609  208589 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.336591313s)
	I1013 22:11:11.163108  208589 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (9.740286327s)
	I1013 22:11:11.163270  208589 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.738300589s)
	I1013 22:11:11.163284  208589 api_server.go:72] duration metric: took 11.297201991s to wait for apiserver process to appear ...
	I1013 22:11:11.163290  208589 api_server.go:88] waiting for apiserver healthz status ...
	I1013 22:11:11.163308  208589 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1013 22:11:11.166178  208589 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-007533 addons enable metrics-server
	
	I1013 22:11:11.169066  208589 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	
	
	==> CRI-O <==
	Oct 13 22:11:05 newest-cni-400889 crio[610]: time="2025-10-13T22:11:05.407240421Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:11:05 newest-cni-400889 crio[610]: time="2025-10-13T22:11:05.414828174Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-2c8dd/POD" id=d1562e3a-c338-4cfb-ad18-7859d96b9586 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 13 22:11:05 newest-cni-400889 crio[610]: time="2025-10-13T22:11:05.414897071Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:11:05 newest-cni-400889 crio[610]: time="2025-10-13T22:11:05.429884916Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=ad17b549-9ba9-41e0-a7b3-0d62af7e86dd name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 13 22:11:05 newest-cni-400889 crio[610]: time="2025-10-13T22:11:05.460877407Z" level=info msg="Ran pod sandbox b8a215e93d9a8896042575b36024873fefdf435d6f98ee1d429208396ba864ac with infra container: kube-system/kindnet-k8zlc/POD" id=ad17b549-9ba9-41e0-a7b3-0d62af7e86dd name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 13 22:11:05 newest-cni-400889 crio[610]: time="2025-10-13T22:11:05.486594177Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=3850c13d-efa3-43d1-baf7-09eb021c8b56 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:11:05 newest-cni-400889 crio[610]: time="2025-10-13T22:11:05.487683298Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=1a257365-972a-4143-8d2a-e97d1a7568c3 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:11:05 newest-cni-400889 crio[610]: time="2025-10-13T22:11:05.468094914Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=d1562e3a-c338-4cfb-ad18-7859d96b9586 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 13 22:11:05 newest-cni-400889 crio[610]: time="2025-10-13T22:11:05.512895941Z" level=info msg="Creating container: kube-system/kindnet-k8zlc/kindnet-cni" id=791caa31-476b-4c06-a3b4-a9940b85bda6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:11:05 newest-cni-400889 crio[610]: time="2025-10-13T22:11:05.51333323Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:11:05 newest-cni-400889 crio[610]: time="2025-10-13T22:11:05.534734491Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:11:05 newest-cni-400889 crio[610]: time="2025-10-13T22:11:05.541216451Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:11:05 newest-cni-400889 crio[610]: time="2025-10-13T22:11:05.608454651Z" level=info msg="Ran pod sandbox aa5a0d3a5b2ccaff2b3bf9f9081ec522b8c9333e4be30519389ae0765e01ee51 with infra container: kube-system/kube-proxy-2c8dd/POD" id=d1562e3a-c338-4cfb-ad18-7859d96b9586 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 13 22:11:05 newest-cni-400889 crio[610]: time="2025-10-13T22:11:05.628403502Z" level=info msg="Created container 2ce0da1c78d96e3b60d9dcbced297952f5c4e36a1407c9551108d004be8d0016: kube-system/kindnet-k8zlc/kindnet-cni" id=791caa31-476b-4c06-a3b4-a9940b85bda6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:11:05 newest-cni-400889 crio[610]: time="2025-10-13T22:11:05.640658977Z" level=info msg="Starting container: 2ce0da1c78d96e3b60d9dcbced297952f5c4e36a1407c9551108d004be8d0016" id=83c28aef-f74f-48b1-9b17-4e2c82902015 name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 22:11:05 newest-cni-400889 crio[610]: time="2025-10-13T22:11:05.652662848Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=d1f59d5f-010f-4e86-9095-7db86e9dbb33 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:11:05 newest-cni-400889 crio[610]: time="2025-10-13T22:11:05.65478601Z" level=info msg="Started container" PID=1058 containerID=2ce0da1c78d96e3b60d9dcbced297952f5c4e36a1407c9551108d004be8d0016 description=kube-system/kindnet-k8zlc/kindnet-cni id=83c28aef-f74f-48b1-9b17-4e2c82902015 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b8a215e93d9a8896042575b36024873fefdf435d6f98ee1d429208396ba864ac
	Oct 13 22:11:05 newest-cni-400889 crio[610]: time="2025-10-13T22:11:05.688158725Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=8a931a27-abb2-4eb2-8454-06a908182226 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:11:05 newest-cni-400889 crio[610]: time="2025-10-13T22:11:05.702583761Z" level=info msg="Creating container: kube-system/kube-proxy-2c8dd/kube-proxy" id=47e7e6d4-3d62-433f-95d1-655b0cca27af name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:11:05 newest-cni-400889 crio[610]: time="2025-10-13T22:11:05.702864369Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:11:05 newest-cni-400889 crio[610]: time="2025-10-13T22:11:05.726124504Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:11:05 newest-cni-400889 crio[610]: time="2025-10-13T22:11:05.744996016Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:11:05 newest-cni-400889 crio[610]: time="2025-10-13T22:11:05.880910967Z" level=info msg="Created container bec5eebd1eb8276e9c73cbe2f0b464d2ff14db348e354fdc02e6dc9d6b2c215b: kube-system/kube-proxy-2c8dd/kube-proxy" id=47e7e6d4-3d62-433f-95d1-655b0cca27af name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:11:05 newest-cni-400889 crio[610]: time="2025-10-13T22:11:05.881704375Z" level=info msg="Starting container: bec5eebd1eb8276e9c73cbe2f0b464d2ff14db348e354fdc02e6dc9d6b2c215b" id=6ecac661-e773-45f4-b932-99bbd14026bc name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 22:11:05 newest-cni-400889 crio[610]: time="2025-10-13T22:11:05.8903956Z" level=info msg="Started container" PID=1068 containerID=bec5eebd1eb8276e9c73cbe2f0b464d2ff14db348e354fdc02e6dc9d6b2c215b description=kube-system/kube-proxy-2c8dd/kube-proxy id=6ecac661-e773-45f4-b932-99bbd14026bc name=/runtime.v1.RuntimeService/StartContainer sandboxID=aa5a0d3a5b2ccaff2b3bf9f9081ec522b8c9333e4be30519389ae0765e01ee51
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	bec5eebd1eb82       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   6 seconds ago       Running             kube-proxy                1                   aa5a0d3a5b2cc       kube-proxy-2c8dd                            kube-system
	2ce0da1c78d96       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   6 seconds ago       Running             kindnet-cni               1                   b8a215e93d9a8       kindnet-k8zlc                               kube-system
	29059c40b00ad       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   17 seconds ago      Running             kube-apiserver            1                   7f11434468299       kube-apiserver-newest-cni-400889            kube-system
	cbd66f7e4aa28       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   17 seconds ago      Running             kube-controller-manager   1                   91c8509d5061e       kube-controller-manager-newest-cni-400889   kube-system
	d7f909f4526bb       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   17 seconds ago      Running             kube-scheduler            1                   4da440eaab414       kube-scheduler-newest-cni-400889            kube-system
	41f03a8f2cf4c       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   17 seconds ago      Running             etcd                      1                   acb0a3a944b50       etcd-newest-cni-400889                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-400889
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-400889
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6d66ff63385795e7745a92b3d96cb54f5b977801
	                    minikube.k8s.io/name=newest-cni-400889
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_13T22_10_37_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 Oct 2025 22:10:32 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-400889
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 Oct 2025 22:11:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 Oct 2025 22:11:05 +0000   Mon, 13 Oct 2025 22:10:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 Oct 2025 22:11:05 +0000   Mon, 13 Oct 2025 22:10:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 Oct 2025 22:11:05 +0000   Mon, 13 Oct 2025 22:10:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Mon, 13 Oct 2025 22:11:05 +0000   Mon, 13 Oct 2025 22:10:29 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-400889
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 b9d5f35dbbd74029b922b46f57d6faf8
	  System UUID:                081b52d5-83f5-4259-9831-31b23d524c2c
	  Boot ID:                    d306204e-e74c-4697-9887-c9f3c96cc083
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-400889                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         36s
	  kube-system                 kindnet-k8zlc                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      32s
	  kube-system                 kube-apiserver-newest-cni-400889             250m (12%)    0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-controller-manager-newest-cni-400889    200m (10%)    0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-proxy-2c8dd                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-scheduler-newest-cni-400889             100m (5%)     0 (0%)      0 (0%)           0 (0%)         36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 29s                kube-proxy       
	  Normal   Starting                 2s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  44s (x8 over 44s)  kubelet          Node newest-cni-400889 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    44s (x8 over 44s)  kubelet          Node newest-cni-400889 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     44s (x8 over 44s)  kubelet          Node newest-cni-400889 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    36s                kubelet          Node newest-cni-400889 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 36s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  36s                kubelet          Node newest-cni-400889 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     36s                kubelet          Node newest-cni-400889 status is now: NodeHasSufficientPID
	  Normal   Starting                 36s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           32s                node-controller  Node newest-cni-400889 event: Registered Node newest-cni-400889 in Controller
	  Normal   Starting                 18s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 18s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  18s (x8 over 18s)  kubelet          Node newest-cni-400889 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    18s (x8 over 18s)  kubelet          Node newest-cni-400889 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     18s (x8 over 18s)  kubelet          Node newest-cni-400889 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           3s                 node-controller  Node newest-cni-400889 event: Registered Node newest-cni-400889 in Controller
	
	
	==> dmesg <==
	[  +7.684868] overlayfs: idmapped layers are currently not supported
	[Oct13 21:43] overlayfs: idmapped layers are currently not supported
	[ +17.500139] overlayfs: idmapped layers are currently not supported
	[Oct13 21:44] overlayfs: idmapped layers are currently not supported
	[ +25.978359] overlayfs: idmapped layers are currently not supported
	[Oct13 21:46] overlayfs: idmapped layers are currently not supported
	[Oct13 21:47] overlayfs: idmapped layers are currently not supported
	[Oct13 21:49] overlayfs: idmapped layers are currently not supported
	[Oct13 21:50] overlayfs: idmapped layers are currently not supported
	[Oct13 21:51] overlayfs: idmapped layers are currently not supported
	[Oct13 21:53] overlayfs: idmapped layers are currently not supported
	[Oct13 21:54] overlayfs: idmapped layers are currently not supported
	[Oct13 21:55] overlayfs: idmapped layers are currently not supported
	[Oct13 22:02] overlayfs: idmapped layers are currently not supported
	[Oct13 22:04] overlayfs: idmapped layers are currently not supported
	[ +37.438407] overlayfs: idmapped layers are currently not supported
	[Oct13 22:05] overlayfs: idmapped layers are currently not supported
	[Oct13 22:06] overlayfs: idmapped layers are currently not supported
	[Oct13 22:07] overlayfs: idmapped layers are currently not supported
	[ +29.672836] overlayfs: idmapped layers are currently not supported
	[Oct13 22:08] overlayfs: idmapped layers are currently not supported
	[Oct13 22:09] overlayfs: idmapped layers are currently not supported
	[Oct13 22:10] overlayfs: idmapped layers are currently not supported
	[ +26.243538] overlayfs: idmapped layers are currently not supported
	[  +3.497977] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [41f03a8f2cf4c63c92345cd9504c3c2a0150d9555a97a54de9a811e88e7eb0f6] <==
	{"level":"warn","ts":"2025-10-13T22:10:59.773017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:10:59.840483Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:10:59.932053Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:11:00.116101Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:11:00.400005Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:11:00.453685Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:11:00.516426Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:11:00.596831Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:11:00.650105Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:11:00.707445Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:11:00.746155Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:11:00.766679Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:11:00.809920Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:11:00.848040Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:11:00.908978Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59746","server-name":"","error":"read tcp 127.0.0.1:2379->127.0.0.1:59746: read: connection reset by peer"}
	{"level":"warn","ts":"2025-10-13T22:11:00.940730Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:11:00.985040Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:11:01.020811Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:11:01.057317Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:11:01.080026Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:11:01.115599Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:11:01.136759Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:11:01.166315Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:11:01.206044Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:11:01.437703Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59908","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 22:11:12 up  1:53,  0 user,  load average: 6.06, 3.72, 2.63
	Linux newest-cni-400889 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [2ce0da1c78d96e3b60d9dcbced297952f5c4e36a1407c9551108d004be8d0016] <==
	I1013 22:11:05.880151       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1013 22:11:05.893198       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1013 22:11:05.893300       1 main.go:148] setting mtu 1500 for CNI 
	I1013 22:11:05.893312       1 main.go:178] kindnetd IP family: "ipv4"
	I1013 22:11:05.893326       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-13T22:11:06Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1013 22:11:06.141048       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1013 22:11:06.141277       1 controller.go:381] "Waiting for informer caches to sync"
	I1013 22:11:06.141294       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1013 22:11:06.141508       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [29059c40b00ad04ad44738293f0b2017c88e1b61ccaeaff02d2db844814fa5f1] <==
	I1013 22:11:04.513645       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1013 22:11:04.513668       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1013 22:11:04.573021       1 aggregator.go:171] initial CRD sync complete...
	I1013 22:11:04.573044       1 autoregister_controller.go:144] Starting autoregister controller
	I1013 22:11:04.573052       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1013 22:11:04.757717       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1013 22:11:04.759484       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1013 22:11:04.759704       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1013 22:11:04.759721       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1013 22:11:04.782088       1 cache.go:39] Caches are synced for autoregister controller
	I1013 22:11:04.782305       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1013 22:11:04.855057       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1013 22:11:04.912234       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1013 22:11:04.921508       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1013 22:11:05.029159       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1013 22:11:05.874675       1 controller.go:667] quota admission added evaluator for: namespaces
	I1013 22:11:06.286479       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1013 22:11:06.456304       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1013 22:11:06.505722       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1013 22:11:06.884159       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.85.243"}
	I1013 22:11:07.001466       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.68.106"}
	I1013 22:11:10.253704       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1013 22:11:10.261137       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1013 22:11:10.350153       1 controller.go:667] quota admission added evaluator for: endpoints
	I1013 22:11:10.354476       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [cbd66f7e4aa28f071adbbc9c82004ce2e6b5d7758657b18b8705a841c024a4f4] <==
	I1013 22:11:09.922545       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1013 22:11:09.927834       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1013 22:11:09.928212       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1013 22:11:09.930590       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1013 22:11:09.931873       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1013 22:11:09.935084       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1013 22:11:09.938405       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1013 22:11:09.945656       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1013 22:11:09.955837       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1013 22:11:09.956938       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1013 22:11:09.956996       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1013 22:11:09.957764       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1013 22:11:09.961591       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1013 22:11:09.963527       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1013 22:11:09.967986       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1013 22:11:09.972161       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1013 22:11:09.980037       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1013 22:11:09.980083       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1013 22:11:09.980097       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1013 22:11:09.992704       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1013 22:11:09.992755       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1013 22:11:09.995920       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 22:11:10.065627       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 22:11:10.065731       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1013 22:11:10.065766       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [bec5eebd1eb8276e9c73cbe2f0b464d2ff14db348e354fdc02e6dc9d6b2c215b] <==
	I1013 22:11:07.403107       1 server_linux.go:53] "Using iptables proxy"
	I1013 22:11:07.508347       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 22:11:07.612588       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 22:11:07.612713       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1013 22:11:07.612829       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 22:11:09.277331       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1013 22:11:09.277397       1 server_linux.go:132] "Using iptables Proxier"
	I1013 22:11:09.820561       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 22:11:09.820891       1 server.go:527] "Version info" version="v1.34.1"
	I1013 22:11:09.820912       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 22:11:09.899435       1 config.go:106] "Starting endpoint slice config controller"
	I1013 22:11:09.899457       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 22:11:09.899852       1 config.go:200] "Starting service config controller"
	I1013 22:11:09.899860       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 22:11:09.900142       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 22:11:09.900148       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 22:11:09.905178       1 config.go:309] "Starting node config controller"
	I1013 22:11:09.905198       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 22:11:09.905205       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 22:11:10.034792       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1013 22:11:10.036554       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1013 22:11:10.051956       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [d7f909f4526bb28a23b8c602214df3866cb2dc9cc28c820891839a829fc81270] <==
	I1013 22:11:01.536048       1 serving.go:386] Generated self-signed cert in-memory
	I1013 22:11:09.592222       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1013 22:11:09.592261       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 22:11:09.656733       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1013 22:11:09.656820       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1013 22:11:09.656837       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1013 22:11:09.656875       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1013 22:11:09.657493       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 22:11:09.657512       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 22:11:09.657562       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1013 22:11:09.657580       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1013 22:11:09.856902       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1013 22:11:09.857777       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1013 22:11:09.881439       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 13 22:11:00 newest-cni-400889 kubelet[728]: E1013 22:11:00.438515     728 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-400889\" not found" node="newest-cni-400889"
	Oct 13 22:11:00 newest-cni-400889 kubelet[728]: E1013 22:11:00.608369     728 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-400889\" not found" node="newest-cni-400889"
	Oct 13 22:11:03 newest-cni-400889 kubelet[728]: I1013 22:11:03.629138     728 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-400889"
	Oct 13 22:11:03 newest-cni-400889 kubelet[728]: I1013 22:11:03.710899     728 apiserver.go:52] "Watching apiserver"
	Oct 13 22:11:03 newest-cni-400889 kubelet[728]: I1013 22:11:03.925593     728 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 13 22:11:04 newest-cni-400889 kubelet[728]: I1013 22:11:04.718721     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e0608056-bfa9-46cf-a6c4-da63c05dc51a-lib-modules\") pod \"kube-proxy-2c8dd\" (UID: \"e0608056-bfa9-46cf-a6c4-da63c05dc51a\") " pod="kube-system/kube-proxy-2c8dd"
	Oct 13 22:11:04 newest-cni-400889 kubelet[728]: I1013 22:11:04.718787     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/bce90592-0127-4946-bc83-a6b06490dcc1-cni-cfg\") pod \"kindnet-k8zlc\" (UID: \"bce90592-0127-4946-bc83-a6b06490dcc1\") " pod="kube-system/kindnet-k8zlc"
	Oct 13 22:11:04 newest-cni-400889 kubelet[728]: I1013 22:11:04.718811     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e0608056-bfa9-46cf-a6c4-da63c05dc51a-xtables-lock\") pod \"kube-proxy-2c8dd\" (UID: \"e0608056-bfa9-46cf-a6c4-da63c05dc51a\") " pod="kube-system/kube-proxy-2c8dd"
	Oct 13 22:11:04 newest-cni-400889 kubelet[728]: I1013 22:11:04.718845     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bce90592-0127-4946-bc83-a6b06490dcc1-xtables-lock\") pod \"kindnet-k8zlc\" (UID: \"bce90592-0127-4946-bc83-a6b06490dcc1\") " pod="kube-system/kindnet-k8zlc"
	Oct 13 22:11:04 newest-cni-400889 kubelet[728]: I1013 22:11:04.718862     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bce90592-0127-4946-bc83-a6b06490dcc1-lib-modules\") pod \"kindnet-k8zlc\" (UID: \"bce90592-0127-4946-bc83-a6b06490dcc1\") " pod="kube-system/kindnet-k8zlc"
	Oct 13 22:11:05 newest-cni-400889 kubelet[728]: I1013 22:11:05.103641     728 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 13 22:11:05 newest-cni-400889 kubelet[728]: E1013 22:11:05.103913     728 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-400889\" already exists" pod="kube-system/etcd-newest-cni-400889"
	Oct 13 22:11:05 newest-cni-400889 kubelet[728]: I1013 22:11:05.103936     728 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-400889"
	Oct 13 22:11:05 newest-cni-400889 kubelet[728]: I1013 22:11:05.111353     728 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-400889"
	Oct 13 22:11:05 newest-cni-400889 kubelet[728]: I1013 22:11:05.111467     728 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-400889"
	Oct 13 22:11:05 newest-cni-400889 kubelet[728]: I1013 22:11:05.111500     728 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 13 22:11:05 newest-cni-400889 kubelet[728]: I1013 22:11:05.117644     728 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 13 22:11:05 newest-cni-400889 kubelet[728]: E1013 22:11:05.235054     728 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-400889\" already exists" pod="kube-system/kube-apiserver-newest-cni-400889"
	Oct 13 22:11:05 newest-cni-400889 kubelet[728]: I1013 22:11:05.235085     728 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-400889"
	Oct 13 22:11:05 newest-cni-400889 kubelet[728]: E1013 22:11:05.280764     728 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-400889\" already exists" pod="kube-system/kube-controller-manager-newest-cni-400889"
	Oct 13 22:11:05 newest-cni-400889 kubelet[728]: I1013 22:11:05.280797     728 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-400889"
	Oct 13 22:11:05 newest-cni-400889 kubelet[728]: E1013 22:11:05.456024     728 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-400889\" already exists" pod="kube-system/kube-scheduler-newest-cni-400889"
	Oct 13 22:11:09 newest-cni-400889 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 13 22:11:09 newest-cni-400889 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 13 22:11:09 newest-cni-400889 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
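The post-mortem steps below first read a single minikube status field through a Go template and then enumerate pods that are not in the Running phase. A minimal sketch of the same two checks run by hand, reusing the binary, profile and context names exactly as they appear in this report (adjust paths and names for a local setup):

	# Print only the APIServer field of the profile's status
	out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-400889 -n newest-cni-400889
	# List pods in every namespace whose phase is not Running
	kubectl --context newest-cni-400889 get po -A --field-selector=status.phase!=Running -o=jsonpath='{.items[*].metadata.name}'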
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-400889 -n newest-cni-400889
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-400889 -n newest-cni-400889: exit status 2 (398.661488ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-400889 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-cc4wf storage-provisioner dashboard-metrics-scraper-6ffb444bf9-w6242 kubernetes-dashboard-855c9754f9-5p8tk
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-400889 describe pod coredns-66bc5c9577-cc4wf storage-provisioner dashboard-metrics-scraper-6ffb444bf9-w6242 kubernetes-dashboard-855c9754f9-5p8tk
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-400889 describe pod coredns-66bc5c9577-cc4wf storage-provisioner dashboard-metrics-scraper-6ffb444bf9-w6242 kubernetes-dashboard-855c9754f9-5p8tk: exit status 1 (132.710282ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-cc4wf" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-w6242" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-5p8tk" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-400889 describe pod coredns-66bc5c9577-cc4wf storage-provisioner dashboard-metrics-scraper-6ffb444bf9-w6242 kubernetes-dashboard-855c9754f9-5p8tk: exit status 1
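The NotFound errors above are most likely a namespace mismatch rather than genuinely missing pods: kubectl describe pod with no -n flag searches the context's default namespace, while coredns and storage-provisioner normally live in kube-system and the dashboard pods in the kubernetes-dashboard namespace. A hedged sketch of the same check with explicit namespaces (pod names copied from the listing above):

	kubectl --context newest-cni-400889 -n kube-system describe pod coredns-66bc5c9577-cc4wf storage-provisioner
	kubectl --context newest-cni-400889 -n kubernetes-dashboard describe pod dashboard-metrics-scraper-6ffb444bf9-w6242 kubernetes-dashboard-855c9754f9-5p8tk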
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-400889
helpers_test.go:243: (dbg) docker inspect newest-cni-400889:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "327a4b5bba33839b9e9e68b98fb65f311428d8ca9a779bf2218877efb3db1dda",
	        "Created": "2025-10-13T22:10:05.991697046Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 208054,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-13T22:10:46.460409905Z",
	            "FinishedAt": "2025-10-13T22:10:45.51934127Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/327a4b5bba33839b9e9e68b98fb65f311428d8ca9a779bf2218877efb3db1dda/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/327a4b5bba33839b9e9e68b98fb65f311428d8ca9a779bf2218877efb3db1dda/hostname",
	        "HostsPath": "/var/lib/docker/containers/327a4b5bba33839b9e9e68b98fb65f311428d8ca9a779bf2218877efb3db1dda/hosts",
	        "LogPath": "/var/lib/docker/containers/327a4b5bba33839b9e9e68b98fb65f311428d8ca9a779bf2218877efb3db1dda/327a4b5bba33839b9e9e68b98fb65f311428d8ca9a779bf2218877efb3db1dda-json.log",
	        "Name": "/newest-cni-400889",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-400889:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-400889",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "327a4b5bba33839b9e9e68b98fb65f311428d8ca9a779bf2218877efb3db1dda",
	                "LowerDir": "/var/lib/docker/overlay2/c2ce8a657d5380be77a974a499c284981153c449892cad04318c236219fcf9f7-init/diff:/var/lib/docker/overlay2/160e637be34680c755ffc6109c686a7fe5c4dfcba06c9274b3806724dc064518/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c2ce8a657d5380be77a974a499c284981153c449892cad04318c236219fcf9f7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c2ce8a657d5380be77a974a499c284981153c449892cad04318c236219fcf9f7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c2ce8a657d5380be77a974a499c284981153c449892cad04318c236219fcf9f7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-400889",
	                "Source": "/var/lib/docker/volumes/newest-cni-400889/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-400889",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-400889",
	                "name.minikube.sigs.k8s.io": "newest-cni-400889",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e369104e511244431739c735dc726ef3b54e13c953cdf37a3e751c29cd7d98e2",
	            "SandboxKey": "/var/run/docker/netns/e369104e5112",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-400889": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "76:e2:c3:67:52:4b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d596263e55a2c1a0ad1158c1d748ddecdc9ebcca3cfd3b93c9472d82661a4237",
	                    "EndpointID": "e8618627f391bd15a3f4db4869a7108578189c258bbd38d5fb9d7d5ba1e1fc42",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-400889",
	                        "327a4b5bba33"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
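The inspect dump above is the container's complete record; when only a few fields are of interest, docker can extract them directly with a Go-template --format instead of printing everything. A minimal sketch against the same container (name taken from this report; output depends on the container's current state):

	# Container state and init PID only
	docker inspect -f '{{.State.Status}} {{.State.Pid}}' newest-cni-400889
	# Published port map as JSON (e.g. 8443/tcp -> 127.0.0.1:33094 above)
	docker inspect -f '{{json .NetworkSettings.Ports}}' newest-cni-400889
	# Or just the host side of the API server port
	docker port newest-cni-400889 8443/tcp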
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-400889 -n newest-cni-400889
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-400889 -n newest-cni-400889: exit status 2 (382.465987ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-400889 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-400889 logs -n 25: (1.362026418s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ addons  │ enable metrics-server -p embed-certs-251758 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-251758           │ jenkins │ v1.37.0 │ 13 Oct 25 22:08 UTC │                     │
	│ stop    │ -p embed-certs-251758 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-251758           │ jenkins │ v1.37.0 │ 13 Oct 25 22:08 UTC │ 13 Oct 25 22:08 UTC │
	│ addons  │ enable dashboard -p embed-certs-251758 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-251758           │ jenkins │ v1.37.0 │ 13 Oct 25 22:08 UTC │ 13 Oct 25 22:08 UTC │
	│ start   │ -p embed-certs-251758 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-251758           │ jenkins │ v1.37.0 │ 13 Oct 25 22:08 UTC │ 13 Oct 25 22:09 UTC │
	│ image   │ no-preload-998398 image list --format=json                                                                                                                                                                                                    │ no-preload-998398            │ jenkins │ v1.37.0 │ 13 Oct 25 22:08 UTC │ 13 Oct 25 22:08 UTC │
	│ pause   │ -p no-preload-998398 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-998398            │ jenkins │ v1.37.0 │ 13 Oct 25 22:08 UTC │                     │
	│ delete  │ -p no-preload-998398                                                                                                                                                                                                                          │ no-preload-998398            │ jenkins │ v1.37.0 │ 13 Oct 25 22:08 UTC │ 13 Oct 25 22:09 UTC │
	│ delete  │ -p no-preload-998398                                                                                                                                                                                                                          │ no-preload-998398            │ jenkins │ v1.37.0 │ 13 Oct 25 22:09 UTC │ 13 Oct 25 22:09 UTC │
	│ delete  │ -p disable-driver-mounts-691681                                                                                                                                                                                                               │ disable-driver-mounts-691681 │ jenkins │ v1.37.0 │ 13 Oct 25 22:09 UTC │ 13 Oct 25 22:09 UTC │
	│ start   │ -p default-k8s-diff-port-007533 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-007533 │ jenkins │ v1.37.0 │ 13 Oct 25 22:09 UTC │ 13 Oct 25 22:10 UTC │
	│ image   │ embed-certs-251758 image list --format=json                                                                                                                                                                                                   │ embed-certs-251758           │ jenkins │ v1.37.0 │ 13 Oct 25 22:09 UTC │ 13 Oct 25 22:09 UTC │
	│ pause   │ -p embed-certs-251758 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-251758           │ jenkins │ v1.37.0 │ 13 Oct 25 22:09 UTC │                     │
	│ delete  │ -p embed-certs-251758                                                                                                                                                                                                                         │ embed-certs-251758           │ jenkins │ v1.37.0 │ 13 Oct 25 22:09 UTC │ 13 Oct 25 22:09 UTC │
	│ delete  │ -p embed-certs-251758                                                                                                                                                                                                                         │ embed-certs-251758           │ jenkins │ v1.37.0 │ 13 Oct 25 22:10 UTC │ 13 Oct 25 22:10 UTC │
	│ start   │ -p newest-cni-400889 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-400889            │ jenkins │ v1.37.0 │ 13 Oct 25 22:10 UTC │ 13 Oct 25 22:10 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-007533 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-007533 │ jenkins │ v1.37.0 │ 13 Oct 25 22:10 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-007533 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-007533 │ jenkins │ v1.37.0 │ 13 Oct 25 22:10 UTC │ 13 Oct 25 22:10 UTC │
	│ addons  │ enable metrics-server -p newest-cni-400889 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-400889            │ jenkins │ v1.37.0 │ 13 Oct 25 22:10 UTC │                     │
	│ stop    │ -p newest-cni-400889 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-400889            │ jenkins │ v1.37.0 │ 13 Oct 25 22:10 UTC │ 13 Oct 25 22:10 UTC │
	│ addons  │ enable dashboard -p newest-cni-400889 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-400889            │ jenkins │ v1.37.0 │ 13 Oct 25 22:10 UTC │ 13 Oct 25 22:10 UTC │
	│ start   │ -p newest-cni-400889 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-400889            │ jenkins │ v1.37.0 │ 13 Oct 25 22:10 UTC │ 13 Oct 25 22:11 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-007533 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-007533 │ jenkins │ v1.37.0 │ 13 Oct 25 22:10 UTC │ 13 Oct 25 22:10 UTC │
	│ start   │ -p default-k8s-diff-port-007533 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-007533 │ jenkins │ v1.37.0 │ 13 Oct 25 22:10 UTC │                     │
	│ image   │ newest-cni-400889 image list --format=json                                                                                                                                                                                                    │ newest-cni-400889            │ jenkins │ v1.37.0 │ 13 Oct 25 22:11 UTC │ 13 Oct 25 22:11 UTC │
	│ pause   │ -p newest-cni-400889 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-400889            │ jenkins │ v1.37.0 │ 13 Oct 25 22:11 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 22:10:49
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 22:10:49.201763  208589 out.go:360] Setting OutFile to fd 1 ...
	I1013 22:10:49.201894  208589 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:10:49.201905  208589 out.go:374] Setting ErrFile to fd 2...
	I1013 22:10:49.201909  208589 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:10:49.202206  208589 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-2495/.minikube/bin
	I1013 22:10:49.202571  208589 out.go:368] Setting JSON to false
	I1013 22:10:49.203437  208589 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":6784,"bootTime":1760386666,"procs":167,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1013 22:10:49.203501  208589 start.go:141] virtualization:  
	I1013 22:10:49.206276  208589 out.go:179] * [default-k8s-diff-port-007533] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1013 22:10:49.209956  208589 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 22:10:49.210009  208589 notify.go:220] Checking for updates...
	I1013 22:10:49.216483  208589 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 22:10:49.219307  208589 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-2495/kubeconfig
	I1013 22:10:49.222297  208589 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-2495/.minikube
	I1013 22:10:49.225184  208589 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1013 22:10:49.228184  208589 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 22:10:49.231388  208589 config.go:182] Loaded profile config "default-k8s-diff-port-007533": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:10:49.232052  208589 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 22:10:49.252685  208589 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1013 22:10:49.252796  208589 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 22:10:49.314498  208589 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-13 22:10:49.305250888 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 22:10:49.314612  208589 docker.go:318] overlay module found
	I1013 22:10:49.317840  208589 out.go:179] * Using the docker driver based on existing profile
	I1013 22:10:49.320599  208589 start.go:305] selected driver: docker
	I1013 22:10:49.320620  208589 start.go:925] validating driver "docker" against &{Name:default-k8s-diff-port-007533 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-007533 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:10:49.320726  208589 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 22:10:49.321418  208589 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 22:10:49.375566  208589 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-13 22:10:49.366993565 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 22:10:49.375930  208589 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 22:10:49.375964  208589 cni.go:84] Creating CNI manager for ""
	I1013 22:10:49.376023  208589 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 22:10:49.376062  208589 start.go:349] cluster config:
	{Name:default-k8s-diff-port-007533 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-007533 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:10:49.381184  208589 out.go:179] * Starting "default-k8s-diff-port-007533" primary control-plane node in "default-k8s-diff-port-007533" cluster
	I1013 22:10:49.383986  208589 cache.go:123] Beginning downloading kic base image for docker with crio
	I1013 22:10:49.386867  208589 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1013 22:10:49.389709  208589 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:10:49.389759  208589 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-2495/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1013 22:10:49.389772  208589 cache.go:58] Caching tarball of preloaded images
	I1013 22:10:49.389809  208589 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1013 22:10:49.389858  208589 preload.go:233] Found /home/jenkins/minikube-integration/21724-2495/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1013 22:10:49.389869  208589 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1013 22:10:49.390003  208589 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/default-k8s-diff-port-007533/config.json ...
	I1013 22:10:49.409485  208589 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1013 22:10:49.409506  208589 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1013 22:10:49.409542  208589 cache.go:232] Successfully downloaded all kic artifacts
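At this point both artifacts needed for the restart are already local: the lz4 preload tarball in the minikube cache and the pinned kicbase image in the host's Docker daemon. A sketch of verifying the same two things by hand (paths and names taken from the log above; not part of the captured output):

	$ ls /home/jenkins/minikube-integration/21724-2495/.minikube/cache/preloaded-tarball/
	preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	$ docker images --digests gcr.io/k8s-minikube/kicbase-builds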
	I1013 22:10:49.409571  208589 start.go:360] acquireMachinesLock for default-k8s-diff-port-007533: {Name:mk990b5defb290df24f36fb536d48d3275652286 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 22:10:49.409625  208589 start.go:364] duration metric: took 32.762µs to acquireMachinesLock for "default-k8s-diff-port-007533"
	I1013 22:10:49.409648  208589 start.go:96] Skipping create...Using existing machine configuration
	I1013 22:10:49.409661  208589 fix.go:54] fixHost starting: 
	I1013 22:10:49.409903  208589 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-007533 --format={{.State.Status}}
	I1013 22:10:49.426175  208589 fix.go:112] recreateIfNeeded on default-k8s-diff-port-007533: state=Stopped err=<nil>
	W1013 22:10:49.426204  208589 fix.go:138] unexpected machine state, will restart: <nil>
	I1013 22:10:46.428979  207923 out.go:252] * Restarting existing docker container for "newest-cni-400889" ...
	I1013 22:10:46.429076  207923 cli_runner.go:164] Run: docker start newest-cni-400889
	I1013 22:10:46.685088  207923 cli_runner.go:164] Run: docker container inspect newest-cni-400889 --format={{.State.Status}}
	I1013 22:10:46.706065  207923 kic.go:430] container "newest-cni-400889" state is running.
	I1013 22:10:46.708068  207923 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-400889
	I1013 22:10:46.734974  207923 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/newest-cni-400889/config.json ...
	I1013 22:10:46.735197  207923 machine.go:93] provisionDockerMachine start ...
	I1013 22:10:46.735429  207923 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-400889
	I1013 22:10:46.759945  207923 main.go:141] libmachine: Using SSH client type: native
	I1013 22:10:46.760252  207923 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33091 <nil> <nil>}
	I1013 22:10:46.760262  207923 main.go:141] libmachine: About to run SSH command:
	hostname
	I1013 22:10:46.761035  207923 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1013 22:10:49.923873  207923 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-400889
	
	I1013 22:10:49.923959  207923 ubuntu.go:182] provisioning hostname "newest-cni-400889"
	I1013 22:10:49.924052  207923 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-400889
	I1013 22:10:49.948327  207923 main.go:141] libmachine: Using SSH client type: native
	I1013 22:10:49.948686  207923 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33091 <nil> <nil>}
	I1013 22:10:49.948728  207923 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-400889 && echo "newest-cni-400889" | sudo tee /etc/hostname
	I1013 22:10:50.147332  207923 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-400889
	
	I1013 22:10:50.147475  207923 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-400889
	I1013 22:10:50.178758  207923 main.go:141] libmachine: Using SSH client type: native
	I1013 22:10:50.179075  207923 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33091 <nil> <nil>}
	I1013 22:10:50.179094  207923 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-400889' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-400889/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-400889' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 22:10:50.332142  207923 main.go:141] libmachine: SSH cmd err, output: <nil>: 
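The hostname script above only manages the 127.0.1.1 entry: it rewrites an existing mapping in place, or appends one if the file has none. Either branch should leave the node's /etc/hosts with a line like the following (a sketch of the expected state, not output captured in this run):

	$ grep '^127.0.1.1' /etc/hosts
	127.0.1.1 newest-cni-400889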
	I1013 22:10:50.332170  207923 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-2495/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-2495/.minikube}
	I1013 22:10:50.332193  207923 ubuntu.go:190] setting up certificates
	I1013 22:10:50.332205  207923 provision.go:84] configureAuth start
	I1013 22:10:50.332264  207923 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-400889
	I1013 22:10:50.349907  207923 provision.go:143] copyHostCerts
	I1013 22:10:50.349970  207923 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-2495/.minikube/ca.pem, removing ...
	I1013 22:10:50.349996  207923 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-2495/.minikube/ca.pem
	I1013 22:10:50.350077  207923 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-2495/.minikube/ca.pem (1082 bytes)
	I1013 22:10:50.350177  207923 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-2495/.minikube/cert.pem, removing ...
	I1013 22:10:50.350187  207923 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-2495/.minikube/cert.pem
	I1013 22:10:50.350219  207923 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-2495/.minikube/cert.pem (1123 bytes)
	I1013 22:10:50.350273  207923 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-2495/.minikube/key.pem, removing ...
	I1013 22:10:50.350284  207923 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-2495/.minikube/key.pem
	I1013 22:10:50.350308  207923 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-2495/.minikube/key.pem (1675 bytes)
	I1013 22:10:50.350355  207923 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-2495/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca-key.pem org=jenkins.newest-cni-400889 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-400889]
	I1013 22:10:51.819381  207923 provision.go:177] copyRemoteCerts
	I1013 22:10:51.819472  207923 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 22:10:51.819520  207923 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-400889
	I1013 22:10:51.837391  207923 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33091 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/newest-cni-400889/id_rsa Username:docker}
	I1013 22:10:51.941181  207923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1013 22:10:51.957921  207923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1013 22:10:51.975247  207923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1013 22:10:51.992135  207923 provision.go:87] duration metric: took 1.659906869s to configureAuth
	I1013 22:10:51.992162  207923 ubuntu.go:206] setting minikube options for container-runtime
	I1013 22:10:51.992386  207923 config.go:182] Loaded profile config "newest-cni-400889": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:10:51.992505  207923 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-400889
	I1013 22:10:52.012670  207923 main.go:141] libmachine: Using SSH client type: native
	I1013 22:10:52.013000  207923 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33091 <nil> <nil>}
	I1013 22:10:52.013021  207923 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1013 22:10:52.307539  207923 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1013 22:10:52.307561  207923 machine.go:96] duration metric: took 5.57235436s to provisionDockerMachine
	I1013 22:10:52.307571  207923 start.go:293] postStartSetup for "newest-cni-400889" (driver="docker")
	I1013 22:10:52.307583  207923 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 22:10:52.307657  207923 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 22:10:52.307695  207923 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-400889
	I1013 22:10:52.324260  207923 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33091 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/newest-cni-400889/id_rsa Username:docker}
	I1013 22:10:52.427651  207923 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 22:10:52.431030  207923 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1013 22:10:52.431059  207923 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1013 22:10:52.431070  207923 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-2495/.minikube/addons for local assets ...
	I1013 22:10:52.431125  207923 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-2495/.minikube/files for local assets ...
	I1013 22:10:52.431210  207923 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem -> 42992.pem in /etc/ssl/certs
	I1013 22:10:52.431318  207923 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1013 22:10:52.439129  207923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem --> /etc/ssl/certs/42992.pem (1708 bytes)
	I1013 22:10:52.456960  207923 start.go:296] duration metric: took 149.373081ms for postStartSetup
	I1013 22:10:52.457054  207923 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 22:10:52.457109  207923 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-400889
	I1013 22:10:52.475070  207923 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33091 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/newest-cni-400889/id_rsa Username:docker}
	I1013 22:10:52.572733  207923 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
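Both disk checks rely on the same awk idiom: NR==2 selects the data row of df's two-line output and the field number picks one column, so `df -h /var | awk 'NR==2{print $5}'` returns the use percentage and `df -BG /var | awk 'NR==2{print $4}'` the space still available in GiB. Run by hand it looks roughly like this (value illustrative, not from this run):

	$ df -BG /var | awk 'NR==2{print $4}'
	15G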
	I1013 22:10:52.577441  207923 fix.go:56] duration metric: took 6.167799717s for fixHost
	I1013 22:10:52.577466  207923 start.go:83] releasing machines lock for "newest-cni-400889", held for 6.167850661s
	I1013 22:10:52.577539  207923 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-400889
	I1013 22:10:52.594674  207923 ssh_runner.go:195] Run: cat /version.json
	I1013 22:10:52.594703  207923 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 22:10:52.594729  207923 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-400889
	I1013 22:10:52.594760  207923 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-400889
	I1013 22:10:52.615708  207923 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33091 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/newest-cni-400889/id_rsa Username:docker}
	I1013 22:10:52.616079  207923 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33091 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/newest-cni-400889/id_rsa Username:docker}
	I1013 22:10:52.715338  207923 ssh_runner.go:195] Run: systemctl --version
	I1013 22:10:52.722178  207923 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1013 22:10:52.829759  207923 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 22:10:52.834454  207923 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 22:10:52.834524  207923 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 22:10:52.842950  207923 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1013 22:10:52.842974  207923 start.go:495] detecting cgroup driver to use...
	I1013 22:10:52.843006  207923 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1013 22:10:52.843061  207923 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1013 22:10:52.858462  207923 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1013 22:10:52.872048  207923 docker.go:218] disabling cri-docker service (if available) ...
	I1013 22:10:52.872164  207923 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 22:10:52.888280  207923 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 22:10:52.901611  207923 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 22:10:53.049664  207923 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 22:10:53.187507  207923 docker.go:234] disabling docker service ...
	I1013 22:10:53.187582  207923 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 22:10:53.206596  207923 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 22:10:53.223551  207923 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 22:10:53.360492  207923 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 22:10:53.505968  207923 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
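With containerd stopped and both the docker and cri-docker units stopped, disabled, and masked, CRI-O is left as the only runtime for minikube to manage. A hypothetical manual check of that state (not part of the captured log):

	# prints one status per unit; only crio should report "active" once it is restarted below
	$ systemctl is-active docker cri-docker.service containerd crio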
	I1013 22:10:53.520660  207923 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 22:10:53.541512  207923 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1013 22:10:53.541583  207923 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:10:53.554551  207923 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1013 22:10:53.554617  207923 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:10:53.566727  207923 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:10:53.579652  207923 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:10:53.590191  207923 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 22:10:53.603558  207923 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:10:53.612209  207923 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:10:53.620009  207923 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:10:53.627983  207923 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 22:10:53.635011  207923 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1013 22:10:53.642008  207923 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:10:53.777214  207923 ssh_runner.go:195] Run: sudo systemctl restart crio
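Taken together, the crictl.yaml write and the sed edits above aim for roughly the following on-node configuration (reconstructed from the commands, not a dump of the actual files; other keys and the TOML section headers in 02-crio.conf are omitted):

	# /etc/crictl.yaml
	runtime-endpoint: unix:///var/run/crio/crio.sock

	# /etc/crio/crio.conf.d/02-crio.conf (relevant keys only, sketch)
	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]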
	I1013 22:10:53.945194  207923 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1013 22:10:53.945273  207923 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1013 22:10:53.950847  207923 start.go:563] Will wait 60s for crictl version
	I1013 22:10:53.950906  207923 ssh_runner.go:195] Run: which crictl
	I1013 22:10:53.958153  207923 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1013 22:10:54.008311  207923 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1013 22:10:54.008414  207923 ssh_runner.go:195] Run: crio --version
	I1013 22:10:54.045586  207923 ssh_runner.go:195] Run: crio --version
	I1013 22:10:54.094907  207923 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1013 22:10:54.097629  207923 cli_runner.go:164] Run: docker network inspect newest-cni-400889 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 22:10:54.119697  207923 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1013 22:10:54.123443  207923 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
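That one-liner is a safe in-place edit of /etc/hosts: it filters out any stale host.minikube.internal entry, appends the current gateway mapping, writes the result to a temp file, and only then copies it back with sudo. The same command expanded into readable shell (an equivalent sketch, not a different procedure):

	{
	  grep -v $'\thost.minikube.internal$' /etc/hosts   # drop any existing entry
	  echo $'192.168.85.1\thost.minikube.internal'      # map the name to the Docker network gateway
	} > /tmp/h.$$                                       # $$ = shell PID, giving a unique temp file
	sudo cp /tmp/h.$$ /etc/hosts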
	I1013 22:10:54.135744  207923 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1013 22:10:49.429346  208589 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-007533" ...
	I1013 22:10:49.429439  208589 cli_runner.go:164] Run: docker start default-k8s-diff-port-007533
	I1013 22:10:49.681999  208589 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-007533 --format={{.State.Status}}
	I1013 22:10:49.704933  208589 kic.go:430] container "default-k8s-diff-port-007533" state is running.
	I1013 22:10:49.705319  208589 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-007533
	I1013 22:10:49.724140  208589 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/default-k8s-diff-port-007533/config.json ...
	I1013 22:10:49.724369  208589 machine.go:93] provisionDockerMachine start ...
	I1013 22:10:49.724445  208589 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-007533
	I1013 22:10:49.745661  208589 main.go:141] libmachine: Using SSH client type: native
	I1013 22:10:49.745975  208589 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33096 <nil> <nil>}
	I1013 22:10:49.745994  208589 main.go:141] libmachine: About to run SSH command:
	hostname
	I1013 22:10:49.747465  208589 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1013 22:10:52.907381  208589 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-007533
	
	I1013 22:10:52.907414  208589 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-007533"
	I1013 22:10:52.907480  208589 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-007533
	I1013 22:10:52.929300  208589 main.go:141] libmachine: Using SSH client type: native
	I1013 22:10:52.929610  208589 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33096 <nil> <nil>}
	I1013 22:10:52.929627  208589 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-007533 && echo "default-k8s-diff-port-007533" | sudo tee /etc/hostname
	I1013 22:10:53.119423  208589 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-007533
	
	I1013 22:10:53.119579  208589 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-007533
	I1013 22:10:53.145508  208589 main.go:141] libmachine: Using SSH client type: native
	I1013 22:10:53.145823  208589 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33096 <nil> <nil>}
	I1013 22:10:53.145841  208589 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-007533' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-007533/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-007533' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 22:10:53.295958  208589 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1013 22:10:53.295979  208589 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-2495/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-2495/.minikube}
	I1013 22:10:53.295997  208589 ubuntu.go:190] setting up certificates
	I1013 22:10:53.296007  208589 provision.go:84] configureAuth start
	I1013 22:10:53.296077  208589 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-007533
	I1013 22:10:53.313586  208589 provision.go:143] copyHostCerts
	I1013 22:10:53.313646  208589 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-2495/.minikube/ca.pem, removing ...
	I1013 22:10:53.313662  208589 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-2495/.minikube/ca.pem
	I1013 22:10:53.313733  208589 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-2495/.minikube/ca.pem (1082 bytes)
	I1013 22:10:53.313831  208589 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-2495/.minikube/cert.pem, removing ...
	I1013 22:10:53.313836  208589 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-2495/.minikube/cert.pem
	I1013 22:10:53.313861  208589 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-2495/.minikube/cert.pem (1123 bytes)
	I1013 22:10:53.313924  208589 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-2495/.minikube/key.pem, removing ...
	I1013 22:10:53.313928  208589 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-2495/.minikube/key.pem
	I1013 22:10:53.313951  208589 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-2495/.minikube/key.pem (1675 bytes)
	I1013 22:10:53.314003  208589 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-2495/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-007533 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-007533 localhost minikube]
	I1013 22:10:54.039583  208589 provision.go:177] copyRemoteCerts
	I1013 22:10:54.039697  208589 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 22:10:54.039787  208589 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-007533
	I1013 22:10:54.061098  208589 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33096 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/default-k8s-diff-port-007533/id_rsa Username:docker}
	I1013 22:10:54.167740  208589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1013 22:10:54.187005  208589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1013 22:10:54.138411  207923 kubeadm.go:883] updating cluster {Name:newest-cni-400889 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-400889 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:
262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 22:10:54.138544  207923 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:10:54.138623  207923 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 22:10:54.179026  207923 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 22:10:54.179046  207923 crio.go:433] Images already preloaded, skipping extraction
	I1013 22:10:54.179103  207923 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 22:10:54.214312  207923 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 22:10:54.214389  207923 cache_images.go:85] Images are preloaded, skipping loading
	I1013 22:10:54.214412  207923 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1013 22:10:54.214549  207923 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-400889 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-400889 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
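Once the unit file and the 10-kubeadm.conf drop-in are written (the scp steps a few lines below), the effective kubelet invocation can be verified directly on the node. A hedged sketch of such a check (standard systemd/procps commands, not part of the captured log):

	$ systemctl cat kubelet    # shows kubelet.service plus the 10-kubeadm.conf drop-in
	$ pgrep -a kubelet         # confirms --hostname-override and --node-ip reached the command line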
	I1013 22:10:54.214638  207923 ssh_runner.go:195] Run: crio config
	I1013 22:10:54.314857  207923 cni.go:84] Creating CNI manager for ""
	I1013 22:10:54.314880  207923 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 22:10:54.314903  207923 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1013 22:10:54.314931  207923 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-400889 NodeName:newest-cni-400889 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 22:10:54.315053  207923 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-400889"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1013 22:10:54.315119  207923 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1013 22:10:54.327079  207923 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 22:10:54.327152  207923 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 22:10:54.335281  207923 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1013 22:10:54.349772  207923 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 22:10:54.366006  207923 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
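The rendered manifest is staged as kubeadm.yaml.new rather than applied blindly; further down in this log minikube runs `sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new` and concludes that the running cluster does not require reconfiguration. The same check by hand (sketch):

	$ sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
	    && echo "no control-plane reconfiguration needed"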
	I1013 22:10:54.382687  207923 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1013 22:10:54.386051  207923 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 22:10:54.395105  207923 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:10:54.535334  207923 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 22:10:54.554994  207923 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/newest-cni-400889 for IP: 192.168.85.2
	I1013 22:10:54.555017  207923 certs.go:195] generating shared ca certs ...
	I1013 22:10:54.555035  207923 certs.go:227] acquiring lock for ca certs: {Name:mk2386d3847709c1fe7ff4ab092e2e3fd8551167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:10:54.555173  207923 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-2495/.minikube/ca.key
	I1013 22:10:54.555235  207923 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.key
	I1013 22:10:54.555245  207923 certs.go:257] generating profile certs ...
	I1013 22:10:54.555327  207923 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/newest-cni-400889/client.key
	I1013 22:10:54.555393  207923 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/newest-cni-400889/apiserver.key.58b80bf4
	I1013 22:10:54.555434  207923 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/newest-cni-400889/proxy-client.key
	I1013 22:10:54.555552  207923 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/4299.pem (1338 bytes)
	W1013 22:10:54.555587  207923 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-2495/.minikube/certs/4299_empty.pem, impossibly tiny 0 bytes
	I1013 22:10:54.555599  207923 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca-key.pem (1679 bytes)
	I1013 22:10:54.555624  207923 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem (1082 bytes)
	I1013 22:10:54.555651  207923 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem (1123 bytes)
	I1013 22:10:54.555683  207923 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/key.pem (1675 bytes)
	I1013 22:10:54.555730  207923 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem (1708 bytes)
	I1013 22:10:54.556403  207923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 22:10:54.597734  207923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1013 22:10:54.664803  207923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 22:10:54.697571  207923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1013 22:10:54.719822  207923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/newest-cni-400889/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1013 22:10:54.750988  207923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/newest-cni-400889/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1013 22:10:54.776949  207923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/newest-cni-400889/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 22:10:54.810269  207923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/newest-cni-400889/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1013 22:10:54.855660  207923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/certs/4299.pem --> /usr/share/ca-certificates/4299.pem (1338 bytes)
	I1013 22:10:54.878583  207923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem --> /usr/share/ca-certificates/42992.pem (1708 bytes)
	I1013 22:10:54.904223  207923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 22:10:54.926069  207923 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 22:10:54.942267  207923 ssh_runner.go:195] Run: openssl version
	I1013 22:10:54.949276  207923 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4299.pem && ln -fs /usr/share/ca-certificates/4299.pem /etc/ssl/certs/4299.pem"
	I1013 22:10:54.957562  207923 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4299.pem
	I1013 22:10:54.961480  207923 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 13 21:06 /usr/share/ca-certificates/4299.pem
	I1013 22:10:54.961541  207923 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4299.pem
	I1013 22:10:55.010279  207923 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4299.pem /etc/ssl/certs/51391683.0"
	I1013 22:10:55.021300  207923 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/42992.pem && ln -fs /usr/share/ca-certificates/42992.pem /etc/ssl/certs/42992.pem"
	I1013 22:10:55.032220  207923 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42992.pem
	I1013 22:10:55.038656  207923 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 13 21:06 /usr/share/ca-certificates/42992.pem
	I1013 22:10:55.038732  207923 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42992.pem
	I1013 22:10:55.096259  207923 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/42992.pem /etc/ssl/certs/3ec20f2e.0"
	I1013 22:10:55.110031  207923 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 22:10:55.140295  207923 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:10:55.157135  207923 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 20:59 /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:10:55.157219  207923 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:10:55.216659  207923 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
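The symlink names used here (51391683.0, 3ec20f2e.0, b5213941.0) are the OpenSSL subject hashes printed by the `openssl x509 -hash -noout` calls just above; that hash-named link is how OpenSSL's lookup finds a CA certificate in /etc/ssl/certs. Creating such a link by hand follows the same idiom (illustrative sketch):

	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"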
	I1013 22:10:55.226292  207923 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 22:10:55.231893  207923 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1013 22:10:55.304205  207923 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1013 22:10:55.389692  207923 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1013 22:10:55.487661  207923 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1013 22:10:55.556647  207923 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1013 22:10:55.753356  207923 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
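Each of these openssl calls uses `-checkend 86400`, which exits non-zero if the certificate expires within the next 86400 seconds (24 hours); presumably this is how the restart path decides the existing control-plane certificates are still safe to reuse. The same test in isolation (sketch):

	if openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400; then
	  echo "cert valid for at least another 24h"
	else
	  echo "cert expires within 24h; regeneration needed"
	fi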
	I1013 22:10:55.974260  207923 kubeadm.go:400] StartCluster: {Name:newest-cni-400889 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-400889 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262
144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:10:55.974346  207923 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 22:10:55.974416  207923 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 22:10:56.045465  207923 cri.go:89] found id: "29059c40b00ad04ad44738293f0b2017c88e1b61ccaeaff02d2db844814fa5f1"
	I1013 22:10:56.045494  207923 cri.go:89] found id: "cbd66f7e4aa28f071adbbc9c82004ce2e6b5d7758657b18b8705a841c024a4f4"
	I1013 22:10:56.045499  207923 cri.go:89] found id: "d7f909f4526bb28a23b8c602214df3866cb2dc9cc28c820891839a829fc81270"
	I1013 22:10:56.045503  207923 cri.go:89] found id: "41f03a8f2cf4c63c92345cd9504c3c2a0150d9555a97a54de9a811e88e7eb0f6"
	I1013 22:10:56.045507  207923 cri.go:89] found id: ""
	I1013 22:10:56.045565  207923 ssh_runner.go:195] Run: sudo runc list -f json
	W1013 22:10:56.082973  207923 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:10:56Z" level=error msg="open /run/runc: no such file or directory"
	I1013 22:10:56.083086  207923 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1013 22:10:56.101552  207923 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1013 22:10:56.101581  207923 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1013 22:10:56.101644  207923 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1013 22:10:56.118379  207923 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1013 22:10:56.118932  207923 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-400889" does not appear in /home/jenkins/minikube-integration/21724-2495/kubeconfig
	I1013 22:10:56.119066  207923 kubeconfig.go:62] /home/jenkins/minikube-integration/21724-2495/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-400889" cluster setting kubeconfig missing "newest-cni-400889" context setting]
	I1013 22:10:56.119407  207923 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/kubeconfig: {Name:mke211bdcf461494c5205a242d94f4d44bbce10e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:10:56.121206  207923 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1013 22:10:56.134016  207923 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1013 22:10:56.134048  207923 kubeadm.go:601] duration metric: took 32.460427ms to restartPrimaryControlPlane
	I1013 22:10:56.134069  207923 kubeadm.go:402] duration metric: took 159.818435ms to StartCluster
	I1013 22:10:56.134088  207923 settings.go:142] acquiring lock: {Name:mk4a4b065845724eb9b4bb1832a39a02e57dd066 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:10:56.134173  207923 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-2495/kubeconfig
	I1013 22:10:56.135009  207923 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/kubeconfig: {Name:mke211bdcf461494c5205a242d94f4d44bbce10e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:10:56.135318  207923 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 22:10:56.135931  207923 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1013 22:10:56.136051  207923 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-400889"
	I1013 22:10:56.136065  207923 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-400889"
	W1013 22:10:56.136071  207923 addons.go:247] addon storage-provisioner should already be in state true
	I1013 22:10:56.136095  207923 host.go:66] Checking if "newest-cni-400889" exists ...
	I1013 22:10:56.136684  207923 addons.go:69] Setting dashboard=true in profile "newest-cni-400889"
	I1013 22:10:56.136715  207923 addons.go:238] Setting addon dashboard=true in "newest-cni-400889"
	W1013 22:10:56.136726  207923 addons.go:247] addon dashboard should already be in state true
	I1013 22:10:56.136764  207923 host.go:66] Checking if "newest-cni-400889" exists ...
	I1013 22:10:56.136816  207923 config.go:182] Loaded profile config "newest-cni-400889": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:10:56.137227  207923 cli_runner.go:164] Run: docker container inspect newest-cni-400889 --format={{.State.Status}}
	I1013 22:10:56.137294  207923 cli_runner.go:164] Run: docker container inspect newest-cni-400889 --format={{.State.Status}}
	I1013 22:10:56.138894  207923 addons.go:69] Setting default-storageclass=true in profile "newest-cni-400889"
	I1013 22:10:56.138916  207923 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-400889"
	I1013 22:10:56.139476  207923 cli_runner.go:164] Run: docker container inspect newest-cni-400889 --format={{.State.Status}}
	I1013 22:10:56.147843  207923 out.go:179] * Verifying Kubernetes components...
	I1013 22:10:56.151581  207923 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:10:56.180697  207923 addons.go:238] Setting addon default-storageclass=true in "newest-cni-400889"
	W1013 22:10:56.180721  207923 addons.go:247] addon default-storageclass should already be in state true
	I1013 22:10:56.180744  207923 host.go:66] Checking if "newest-cni-400889" exists ...
	I1013 22:10:56.181167  207923 cli_runner.go:164] Run: docker container inspect newest-cni-400889 --format={{.State.Status}}
	I1013 22:10:56.216080  207923 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1013 22:10:56.219169  207923 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1013 22:10:56.219348  207923 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1013 22:10:54.210901  208589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1013 22:10:54.231140  208589 provision.go:87] duration metric: took 935.11087ms to configureAuth
	I1013 22:10:54.231163  208589 ubuntu.go:206] setting minikube options for container-runtime
	I1013 22:10:54.231346  208589 config.go:182] Loaded profile config "default-k8s-diff-port-007533": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:10:54.231438  208589 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-007533
	I1013 22:10:54.249260  208589 main.go:141] libmachine: Using SSH client type: native
	I1013 22:10:54.249566  208589 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33096 <nil> <nil>}
	I1013 22:10:54.249581  208589 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1013 22:10:54.637039  208589 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1013 22:10:54.637100  208589 machine.go:96] duration metric: took 4.91271327s to provisionDockerMachine
	I1013 22:10:54.637142  208589 start.go:293] postStartSetup for "default-k8s-diff-port-007533" (driver="docker")
	I1013 22:10:54.637179  208589 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 22:10:54.637268  208589 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 22:10:54.637331  208589 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-007533
	I1013 22:10:54.663897  208589 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33096 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/default-k8s-diff-port-007533/id_rsa Username:docker}
	I1013 22:10:54.794380  208589 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 22:10:54.798094  208589 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1013 22:10:54.798119  208589 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1013 22:10:54.798130  208589 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-2495/.minikube/addons for local assets ...
	I1013 22:10:54.798182  208589 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-2495/.minikube/files for local assets ...
	I1013 22:10:54.798267  208589 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem -> 42992.pem in /etc/ssl/certs
	I1013 22:10:54.798368  208589 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1013 22:10:54.808936  208589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem --> /etc/ssl/certs/42992.pem (1708 bytes)
	I1013 22:10:54.836214  208589 start.go:296] duration metric: took 199.045534ms for postStartSetup
	I1013 22:10:54.836333  208589 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 22:10:54.836421  208589 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-007533
	I1013 22:10:54.857310  208589 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33096 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/default-k8s-diff-port-007533/id_rsa Username:docker}
	I1013 22:10:54.969272  208589 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1013 22:10:54.974468  208589 fix.go:56] duration metric: took 5.564804544s for fixHost
	I1013 22:10:54.974497  208589 start.go:83] releasing machines lock for "default-k8s-diff-port-007533", held for 5.5648606s
	I1013 22:10:54.974577  208589 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-007533
	I1013 22:10:54.997424  208589 ssh_runner.go:195] Run: cat /version.json
	I1013 22:10:54.997479  208589 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-007533
	I1013 22:10:54.997712  208589 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 22:10:54.997771  208589 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-007533
	I1013 22:10:55.045183  208589 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33096 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/default-k8s-diff-port-007533/id_rsa Username:docker}
	I1013 22:10:55.059996  208589 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33096 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/default-k8s-diff-port-007533/id_rsa Username:docker}
	I1013 22:10:55.176566  208589 ssh_runner.go:195] Run: systemctl --version
	I1013 22:10:55.292221  208589 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1013 22:10:55.343619  208589 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 22:10:55.349850  208589 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 22:10:55.349931  208589 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 22:10:55.358116  208589 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1013 22:10:55.358139  208589 start.go:495] detecting cgroup driver to use...
	I1013 22:10:55.358171  208589 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1013 22:10:55.358238  208589 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1013 22:10:55.374322  208589 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1013 22:10:55.392310  208589 docker.go:218] disabling cri-docker service (if available) ...
	I1013 22:10:55.392413  208589 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 22:10:55.420448  208589 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 22:10:55.439988  208589 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 22:10:55.611005  208589 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 22:10:55.874196  208589 docker.go:234] disabling docker service ...
	I1013 22:10:55.874346  208589 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 22:10:55.907645  208589 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 22:10:55.930037  208589 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 22:10:56.265364  208589 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 22:10:56.568554  208589 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1013 22:10:56.591948  208589 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 22:10:56.613372  208589 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1013 22:10:56.613433  208589 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:10:56.639001  208589 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1013 22:10:56.639072  208589 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:10:56.664947  208589 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:10:56.684552  208589 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:10:56.700063  208589 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 22:10:56.715563  208589 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:10:56.727734  208589 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:10:56.746489  208589 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:10:56.757597  208589 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 22:10:56.773146  208589 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1013 22:10:56.786877  208589 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:10:57.005391  208589 ssh_runner.go:195] Run: sudo systemctl restart crio
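The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) before CRI-O is restarted. As a rough standalone sketch of the first two edits, not minikube's actual implementation (the file path and replacement values are taken from the log above):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// patchCrioConf applies the same style of line rewrites the log shows sed doing:
// point pause_image at the desired pause image and force the chosen cgroup manager.
func patchCrioConf(path, pauseImage, cgroupManager string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	conf := string(data)
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", cgroupManager))
	return os.WriteFile(path, []byte(conf), 0o644)
}

func main() {
	if err := patchCrioConf("/etc/crio/crio.conf.d/02-crio.conf",
		"registry.k8s.io/pause:3.10.1", "cgroupfs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// A restart of CRI-O (systemctl restart crio) is still needed for the change to take effect.
}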
	I1013 22:10:57.199574  208589 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1013 22:10:57.199719  208589 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1013 22:10:57.213721  208589 start.go:563] Will wait 60s for crictl version
	I1013 22:10:57.213861  208589 ssh_runner.go:195] Run: which crictl
	I1013 22:10:57.218507  208589 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1013 22:10:57.270598  208589 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1013 22:10:57.270688  208589 ssh_runner.go:195] Run: crio --version
	I1013 22:10:57.326670  208589 ssh_runner.go:195] Run: crio --version
	I1013 22:10:57.403454  208589 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1013 22:10:57.406297  208589 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-007533 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 22:10:57.433691  208589 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1013 22:10:57.439894  208589 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
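The /etc/hosts update above drops any stale host.minikube.internal entry and appends a fresh one pointing at the network gateway IP. A standalone Go sketch of the same rewrite, assuming direct file access instead of the SSH shell pipeline minikube actually runs:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry mirrors the shell pipeline in the log: remove any existing
// line for the given hostname from /etc/hosts, then append a fresh "<ip>\t<host>" entry.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // old entry, re-added below
		}
		kept = append(kept, line)
	}
	out := strings.TrimRight(strings.Join(kept, "\n"), "\n")
	out += fmt.Sprintf("\n%s\t%s\n", ip, host)
	return os.WriteFile(path, []byte(out), 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.76.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}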
	I1013 22:10:57.451035  208589 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-007533 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-007533 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 22:10:57.451159  208589 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:10:57.451221  208589 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 22:10:57.518190  208589 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 22:10:57.518274  208589 crio.go:433] Images already preloaded, skipping extraction
	I1013 22:10:57.518371  208589 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 22:10:57.586742  208589 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 22:10:57.586764  208589 cache_images.go:85] Images are preloaded, skipping loading
	I1013 22:10:57.586772  208589 kubeadm.go:934] updating node { 192.168.76.2 8444 v1.34.1 crio true true} ...
	I1013 22:10:57.586875  208589 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-007533 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-007533 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1013 22:10:57.586962  208589 ssh_runner.go:195] Run: crio config
	I1013 22:10:57.676611  208589 cni.go:84] Creating CNI manager for ""
	I1013 22:10:57.676635  208589 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 22:10:57.676654  208589 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1013 22:10:57.676696  208589 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-007533 NodeName:default-k8s-diff-port-007533 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 22:10:57.676853  208589 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-007533"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1013 22:10:57.676957  208589 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1013 22:10:57.689549  208589 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 22:10:57.689648  208589 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 22:10:57.699818  208589 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1013 22:10:57.714868  208589 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 22:10:57.741666  208589 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
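The kubeadm config printed above is rendered from the cluster parameters (advertise address, API server port, node name, CRI socket) and copied to /var/tmp/minikube/kubeadm.yaml.new for kubeadm to consume. A minimal, hypothetical Go sketch of that kind of templating, covering only the InitConfiguration stanza; the struct and template here are illustrative, not minikube's own:

package main

import (
	"os"
	"text/template"
)

// initConfigTmpl renders just the InitConfiguration stanza shown in the log above.
const initConfigTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "{{.AdvertiseAddress}}"
  taints: []
`

type initParams struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
}

func main() {
	t := template.Must(template.New("init").Parse(initConfigTmpl))
	// Values taken from the kubeadm options logged above.
	_ = t.Execute(os.Stdout, initParams{
		AdvertiseAddress: "192.168.76.2",
		BindPort:         8444,
		NodeName:         "default-k8s-diff-port-007533",
	})
}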
	I1013 22:10:57.765866  208589 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1013 22:10:57.770037  208589 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 22:10:57.789122  208589 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:10:58.027078  208589 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 22:10:58.047739  208589 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/default-k8s-diff-port-007533 for IP: 192.168.76.2
	I1013 22:10:58.047762  208589 certs.go:195] generating shared ca certs ...
	I1013 22:10:58.047804  208589 certs.go:227] acquiring lock for ca certs: {Name:mk2386d3847709c1fe7ff4ab092e2e3fd8551167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:10:58.047968  208589 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-2495/.minikube/ca.key
	I1013 22:10:58.048033  208589 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.key
	I1013 22:10:58.048054  208589 certs.go:257] generating profile certs ...
	I1013 22:10:58.048169  208589 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/default-k8s-diff-port-007533/client.key
	I1013 22:10:58.048257  208589 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/default-k8s-diff-port-007533/apiserver.key.e8d90e38
	I1013 22:10:58.048326  208589 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/default-k8s-diff-port-007533/proxy-client.key
	I1013 22:10:58.048475  208589 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/4299.pem (1338 bytes)
	W1013 22:10:58.048531  208589 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-2495/.minikube/certs/4299_empty.pem, impossibly tiny 0 bytes
	I1013 22:10:58.048547  208589 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca-key.pem (1679 bytes)
	I1013 22:10:58.048573  208589 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem (1082 bytes)
	I1013 22:10:58.048634  208589 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem (1123 bytes)
	I1013 22:10:58.048663  208589 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/key.pem (1675 bytes)
	I1013 22:10:58.048729  208589 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem (1708 bytes)
	I1013 22:10:58.049332  208589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 22:10:58.110912  208589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1013 22:10:58.162538  208589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 22:10:58.197602  208589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1013 22:10:58.245425  208589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/default-k8s-diff-port-007533/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1013 22:10:58.284987  208589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/default-k8s-diff-port-007533/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1013 22:10:58.338793  208589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/default-k8s-diff-port-007533/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 22:10:58.375748  208589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/default-k8s-diff-port-007533/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1013 22:10:58.395998  208589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 22:10:58.420941  208589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/certs/4299.pem --> /usr/share/ca-certificates/4299.pem (1338 bytes)
	I1013 22:10:58.453039  208589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem --> /usr/share/ca-certificates/42992.pem (1708 bytes)
	I1013 22:10:58.487334  208589 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 22:10:58.505584  208589 ssh_runner.go:195] Run: openssl version
	I1013 22:10:58.516517  208589 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4299.pem && ln -fs /usr/share/ca-certificates/4299.pem /etc/ssl/certs/4299.pem"
	I1013 22:10:58.525613  208589 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4299.pem
	I1013 22:10:58.532102  208589 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 13 21:06 /usr/share/ca-certificates/4299.pem
	I1013 22:10:58.532208  208589 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4299.pem
	I1013 22:10:58.606241  208589 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4299.pem /etc/ssl/certs/51391683.0"
	I1013 22:10:58.615375  208589 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/42992.pem && ln -fs /usr/share/ca-certificates/42992.pem /etc/ssl/certs/42992.pem"
	I1013 22:10:58.633175  208589 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42992.pem
	I1013 22:10:58.637865  208589 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 13 21:06 /usr/share/ca-certificates/42992.pem
	I1013 22:10:58.637956  208589 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42992.pem
	I1013 22:10:58.696712  208589 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/42992.pem /etc/ssl/certs/3ec20f2e.0"
	I1013 22:10:58.705350  208589 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 22:10:58.714979  208589 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:10:58.719907  208589 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 20:59 /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:10:58.720008  208589 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:10:58.812495  208589 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1013 22:10:58.837043  208589 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 22:10:58.853654  208589 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1013 22:10:58.945608  208589 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1013 22:10:59.030267  208589 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1013 22:10:59.245322  208589 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1013 22:10:59.375654  208589 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1013 22:10:59.509656  208589 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
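Each openssl x509 -checkend 86400 call above asks whether the certificate expires within the next 24 hours; a failing check would force regeneration before the cluster is restarted. An equivalent check in Go, shown here only as a sketch of what -checkend does (not minikube's code):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path expires
// within the given window, mirroring openssl's -checkend behaviour.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}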
	I1013 22:10:59.619729  208589 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-007533 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-007533 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:10:59.619910  208589 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 22:10:59.620017  208589 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 22:10:59.759246  208589 cri.go:89] found id: "3970a5fddb4ed9fafd03a56430ff0a855693ce410e375d5e6f5b23115bdec4fe"
	I1013 22:10:59.759328  208589 cri.go:89] found id: "5bbc4021a2610d0a72615ef54a61b83477debc9e67e27338b6ebdad10f29a7bb"
	I1013 22:10:59.759348  208589 cri.go:89] found id: "bd56c0184294021b9044112e6391397dce68b76fd94d9c861cdd5ada9d399899"
	I1013 22:10:59.759364  208589 cri.go:89] found id: "99b9c491479a5e957e19a4c1ca9d1a62f9cde3467897c3b831fc01afd815b1f7"
	I1013 22:10:59.759381  208589 cri.go:89] found id: ""
	I1013 22:10:59.759474  208589 ssh_runner.go:195] Run: sudo runc list -f json
	W1013 22:10:59.788952  208589 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:10:59Z" level=error msg="open /run/runc: no such file or directory"
	I1013 22:10:59.789075  208589 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1013 22:10:59.819475  208589 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1013 22:10:59.819557  208589 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1013 22:10:59.819638  208589 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1013 22:10:59.844038  208589 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1013 22:10:59.844785  208589 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-007533" does not appear in /home/jenkins/minikube-integration/21724-2495/kubeconfig
	I1013 22:10:59.845176  208589 kubeconfig.go:62] /home/jenkins/minikube-integration/21724-2495/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-007533" cluster setting kubeconfig missing "default-k8s-diff-port-007533" context setting]
	I1013 22:10:59.845982  208589 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/kubeconfig: {Name:mke211bdcf461494c5205a242d94f4d44bbce10e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:10:59.848157  208589 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1013 22:10:59.864277  208589 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1013 22:10:59.864373  208589 kubeadm.go:601] duration metric: took 44.796703ms to restartPrimaryControlPlane
	I1013 22:10:59.864398  208589 kubeadm.go:402] duration metric: took 244.685898ms to StartCluster
	I1013 22:10:59.864452  208589 settings.go:142] acquiring lock: {Name:mk4a4b065845724eb9b4bb1832a39a02e57dd066 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:10:59.864569  208589 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-2495/kubeconfig
	I1013 22:10:59.865707  208589 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/kubeconfig: {Name:mke211bdcf461494c5205a242d94f4d44bbce10e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:10:59.866031  208589 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 22:10:59.866537  208589 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1013 22:10:59.866620  208589 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-007533"
	I1013 22:10:59.866639  208589 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-007533"
	W1013 22:10:59.866646  208589 addons.go:247] addon storage-provisioner should already be in state true
	I1013 22:10:59.866671  208589 host.go:66] Checking if "default-k8s-diff-port-007533" exists ...
	I1013 22:10:59.867309  208589 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-007533 --format={{.State.Status}}
	I1013 22:10:59.867693  208589 config.go:182] Loaded profile config "default-k8s-diff-port-007533": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:10:59.867816  208589 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-007533"
	I1013 22:10:59.867853  208589 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-007533"
	W1013 22:10:59.867881  208589 addons.go:247] addon dashboard should already be in state true
	I1013 22:10:59.867927  208589 host.go:66] Checking if "default-k8s-diff-port-007533" exists ...
	I1013 22:10:59.868452  208589 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-007533 --format={{.State.Status}}
	I1013 22:10:59.872934  208589 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-007533"
	I1013 22:10:59.873199  208589 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-007533"
	I1013 22:10:59.873149  208589 out.go:179] * Verifying Kubernetes components...
	I1013 22:10:59.880222  208589 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-007533 --format={{.State.Status}}
	I1013 22:10:59.882022  208589 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:10:59.925446  208589 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1013 22:10:59.928468  208589 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 22:10:59.928495  208589 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1013 22:10:59.928567  208589 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-007533
	I1013 22:10:59.931864  208589 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1013 22:10:59.939877  208589 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1013 22:10:56.222159  207923 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1013 22:10:56.222188  207923 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1013 22:10:56.222292  207923 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-400889
	I1013 22:10:56.227993  207923 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 22:10:56.228029  207923 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1013 22:10:56.228114  207923 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-400889
	I1013 22:10:56.265831  207923 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1013 22:10:56.265860  207923 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1013 22:10:56.265911  207923 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-400889
	I1013 22:10:56.289367  207923 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33091 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/newest-cni-400889/id_rsa Username:docker}
	I1013 22:10:56.300468  207923 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33091 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/newest-cni-400889/id_rsa Username:docker}
	I1013 22:10:56.329960  207923 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33091 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/newest-cni-400889/id_rsa Username:docker}
	I1013 22:10:56.615938  207923 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 22:10:56.657049  207923 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 22:10:56.764657  207923 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1013 22:10:56.764745  207923 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1013 22:10:56.794684  207923 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1013 22:10:56.881178  207923 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1013 22:10:56.881256  207923 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1013 22:10:56.980381  207923 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1013 22:10:56.980474  207923 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1013 22:10:57.061027  207923 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1013 22:10:57.061089  207923 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1013 22:10:57.142140  207923 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1013 22:10:57.142227  207923 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1013 22:10:57.188234  207923 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1013 22:10:57.188320  207923 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1013 22:10:57.215282  207923 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1013 22:10:57.215359  207923 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1013 22:10:57.245227  207923 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1013 22:10:57.245327  207923 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1013 22:10:57.263250  207923 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1013 22:10:57.263333  207923 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1013 22:10:57.304471  207923 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1013 22:10:59.941301  208589 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-007533"
	W1013 22:10:59.941321  208589 addons.go:247] addon default-storageclass should already be in state true
	I1013 22:10:59.941357  208589 host.go:66] Checking if "default-k8s-diff-port-007533" exists ...
	I1013 22:10:59.941808  208589 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-007533 --format={{.State.Status}}
	I1013 22:10:59.949621  208589 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1013 22:10:59.949656  208589 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1013 22:10:59.949720  208589 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-007533
	I1013 22:10:59.976042  208589 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33096 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/default-k8s-diff-port-007533/id_rsa Username:docker}
	I1013 22:10:59.990831  208589 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33096 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/default-k8s-diff-port-007533/id_rsa Username:docker}
	I1013 22:11:00.000668  208589 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1013 22:11:00.000689  208589 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1013 22:11:00.000771  208589 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-007533
	I1013 22:11:00.094535  208589 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33096 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/default-k8s-diff-port-007533/id_rsa Username:docker}
	I1013 22:11:00.663807  208589 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 22:11:00.693522  208589 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1013 22:11:00.756986  208589 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 22:11:00.788498  208589 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1013 22:11:00.788571  208589 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1013 22:11:00.809014  208589 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-007533" to be "Ready" ...
	I1013 22:11:00.900877  208589 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1013 22:11:00.900947  208589 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1013 22:11:00.964752  208589 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1013 22:11:00.964821  208589 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1013 22:11:01.004434  208589 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1013 22:11:01.004469  208589 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1013 22:11:01.064598  208589 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1013 22:11:01.064621  208589 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1013 22:11:01.144314  208589 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1013 22:11:01.144341  208589 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1013 22:11:01.229062  208589 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1013 22:11:01.229090  208589 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1013 22:11:01.324295  208589 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1013 22:11:01.324322  208589 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1013 22:11:01.385269  208589 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1013 22:11:01.385294  208589 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1013 22:11:01.422781  208589 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
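Both clusters apply the dashboard addon by shelling out to the bundled kubectl with KUBECONFIG pointed at /var/lib/minikube/kubeconfig, passing one -f flag per manifest, exactly as the command lines above show. A simplified Go sketch of that pattern (only a subset of the manifests; not minikube's actual addon code):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyManifests mirrors the logged invocation: run the cluster's own kubectl
// binary with KUBECONFIG set, passing one -f flag per addon manifest.
func applyManifests(kubectl, kubeconfig string, manifests []string) error {
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command(kubectl, args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply: %v\n%s", err, out)
	}
	return nil
}

func main() {
	err := applyManifests(
		"/var/lib/minikube/binaries/v1.34.1/kubectl",
		"/var/lib/minikube/kubeconfig",
		[]string{
			"/etc/kubernetes/addons/dashboard-ns.yaml",
			"/etc/kubernetes/addons/dashboard-dp.yaml",
			"/etc/kubernetes/addons/dashboard-svc.yaml",
		},
	)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}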
	I1013 22:11:06.912137  207923 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.296091554s)
	I1013 22:11:06.912195  207923 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (10.255071903s)
	I1013 22:11:06.912229  207923 api_server.go:52] waiting for apiserver process to appear ...
	I1013 22:11:06.912288  207923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 22:11:06.912360  207923 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (10.117600723s)
	I1013 22:11:07.024930  207923 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (9.720359653s)
	I1013 22:11:07.025175  207923 api_server.go:72] duration metric: took 10.889805157s to wait for apiserver process to appear ...
	I1013 22:11:07.025225  207923 api_server.go:88] waiting for apiserver healthz status ...
	I1013 22:11:07.025262  207923 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1013 22:11:07.028829  207923 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-400889 addons enable metrics-server
	
	I1013 22:11:07.031720  207923 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1013 22:11:07.034637  207923 addons.go:514] duration metric: took 10.898698839s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1013 22:11:07.055246  207923 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1013 22:11:07.055284  207923 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1013 22:11:07.525720  207923 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1013 22:11:07.546066  207923 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
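The loop above keeps probing /healthz until the rbac/bootstrap-roles post-start hook finishes and the endpoint returns a plain 200 "ok". A minimal polling sketch in Go; skipping TLS verification here is purely for illustration, whereas the real client trusts the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
// or the deadline passes, similar to the retry loop in the log above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Illustration only: skip verification instead of loading the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.85.2:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}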
	I1013 22:11:07.547446  207923 api_server.go:141] control plane version: v1.34.1
	I1013 22:11:07.547480  207923 api_server.go:131] duration metric: took 522.23014ms to wait for apiserver health ...
	I1013 22:11:07.547489  207923 system_pods.go:43] waiting for kube-system pods to appear ...
	I1013 22:11:07.562338  207923 system_pods.go:59] 8 kube-system pods found
	I1013 22:11:07.562378  207923 system_pods.go:61] "coredns-66bc5c9577-cc4wf" [0bf2694d-f251-4b5b-86fc-6dfc45fe88c4] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1013 22:11:07.562409  207923 system_pods.go:61] "etcd-newest-cni-400889" [67dc0b91-0ac5-4923-a944-5f2dd99ad833] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1013 22:11:07.562422  207923 system_pods.go:61] "kindnet-k8zlc" [bce90592-0127-4946-bc83-a6b06490dcc1] Running
	I1013 22:11:07.562450  207923 system_pods.go:61] "kube-apiserver-newest-cni-400889" [bd2c7b07-69bf-43b7-ba7a-1002daf22666] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1013 22:11:07.562461  207923 system_pods.go:61] "kube-controller-manager-newest-cni-400889" [0f7464a5-ac8f-49fb-92cb-42bedd0068ce] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1013 22:11:07.562467  207923 system_pods.go:61] "kube-proxy-2c8dd" [e0608056-bfa9-46cf-a6c4-da63c05dc51a] Running
	I1013 22:11:07.562502  207923 system_pods.go:61] "kube-scheduler-newest-cni-400889" [8d46c2a1-3b0a-4b30-8143-d2fa1d20f276] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1013 22:11:07.562515  207923 system_pods.go:61] "storage-provisioner" [d60a2a57-2585-4721-aab0-cd73fa7bf7f0] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1013 22:11:07.562522  207923 system_pods.go:74] duration metric: took 15.011146ms to wait for pod list to return data ...
	I1013 22:11:07.562534  207923 default_sa.go:34] waiting for default service account to be created ...
	I1013 22:11:07.570171  207923 default_sa.go:45] found service account: "default"
	I1013 22:11:07.570198  207923 default_sa.go:55] duration metric: took 7.657742ms for default service account to be created ...
	I1013 22:11:07.570212  207923 kubeadm.go:586] duration metric: took 11.434842265s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1013 22:11:07.570252  207923 node_conditions.go:102] verifying NodePressure condition ...
	I1013 22:11:07.574141  207923 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1013 22:11:07.574175  207923 node_conditions.go:123] node cpu capacity is 2
	I1013 22:11:07.574187  207923 node_conditions.go:105] duration metric: took 3.930001ms to run NodePressure ...
	I1013 22:11:07.574199  207923 start.go:241] waiting for startup goroutines ...
	I1013 22:11:07.574239  207923 start.go:246] waiting for cluster config update ...
	I1013 22:11:07.574251  207923 start.go:255] writing updated cluster config ...
	I1013 22:11:07.574554  207923 ssh_runner.go:195] Run: rm -f paused
	I1013 22:11:07.686699  207923 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1013 22:11:07.691738  207923 out.go:179] * Done! kubectl is now configured to use "newest-cni-400889" cluster and "default" namespace by default
	I1013 22:11:07.424856  208589 node_ready.go:49] node "default-k8s-diff-port-007533" is "Ready"
	I1013 22:11:07.424884  208589 node_ready.go:38] duration metric: took 6.615803519s for node "default-k8s-diff-port-007533" to be "Ready" ...
	I1013 22:11:07.424897  208589 api_server.go:52] waiting for apiserver process to appear ...
	I1013 22:11:07.424952  208589 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 22:11:08.442320  208589 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.748766949s)
	I1013 22:11:11.093609  208589 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.336591313s)
	I1013 22:11:11.163108  208589 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (9.740286327s)
	I1013 22:11:11.163270  208589 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.738300589s)
	I1013 22:11:11.163284  208589 api_server.go:72] duration metric: took 11.297201991s to wait for apiserver process to appear ...
	I1013 22:11:11.163290  208589 api_server.go:88] waiting for apiserver healthz status ...
	I1013 22:11:11.163308  208589 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1013 22:11:11.166178  208589 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-007533 addons enable metrics-server
	
	I1013 22:11:11.169066  208589 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	
	
	==> CRI-O <==
	Oct 13 22:11:05 newest-cni-400889 crio[610]: time="2025-10-13T22:11:05.407240421Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:11:05 newest-cni-400889 crio[610]: time="2025-10-13T22:11:05.414828174Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-2c8dd/POD" id=d1562e3a-c338-4cfb-ad18-7859d96b9586 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 13 22:11:05 newest-cni-400889 crio[610]: time="2025-10-13T22:11:05.414897071Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:11:05 newest-cni-400889 crio[610]: time="2025-10-13T22:11:05.429884916Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=ad17b549-9ba9-41e0-a7b3-0d62af7e86dd name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 13 22:11:05 newest-cni-400889 crio[610]: time="2025-10-13T22:11:05.460877407Z" level=info msg="Ran pod sandbox b8a215e93d9a8896042575b36024873fefdf435d6f98ee1d429208396ba864ac with infra container: kube-system/kindnet-k8zlc/POD" id=ad17b549-9ba9-41e0-a7b3-0d62af7e86dd name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 13 22:11:05 newest-cni-400889 crio[610]: time="2025-10-13T22:11:05.486594177Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=3850c13d-efa3-43d1-baf7-09eb021c8b56 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:11:05 newest-cni-400889 crio[610]: time="2025-10-13T22:11:05.487683298Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=1a257365-972a-4143-8d2a-e97d1a7568c3 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:11:05 newest-cni-400889 crio[610]: time="2025-10-13T22:11:05.468094914Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=d1562e3a-c338-4cfb-ad18-7859d96b9586 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 13 22:11:05 newest-cni-400889 crio[610]: time="2025-10-13T22:11:05.512895941Z" level=info msg="Creating container: kube-system/kindnet-k8zlc/kindnet-cni" id=791caa31-476b-4c06-a3b4-a9940b85bda6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:11:05 newest-cni-400889 crio[610]: time="2025-10-13T22:11:05.51333323Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:11:05 newest-cni-400889 crio[610]: time="2025-10-13T22:11:05.534734491Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:11:05 newest-cni-400889 crio[610]: time="2025-10-13T22:11:05.541216451Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:11:05 newest-cni-400889 crio[610]: time="2025-10-13T22:11:05.608454651Z" level=info msg="Ran pod sandbox aa5a0d3a5b2ccaff2b3bf9f9081ec522b8c9333e4be30519389ae0765e01ee51 with infra container: kube-system/kube-proxy-2c8dd/POD" id=d1562e3a-c338-4cfb-ad18-7859d96b9586 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 13 22:11:05 newest-cni-400889 crio[610]: time="2025-10-13T22:11:05.628403502Z" level=info msg="Created container 2ce0da1c78d96e3b60d9dcbced297952f5c4e36a1407c9551108d004be8d0016: kube-system/kindnet-k8zlc/kindnet-cni" id=791caa31-476b-4c06-a3b4-a9940b85bda6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:11:05 newest-cni-400889 crio[610]: time="2025-10-13T22:11:05.640658977Z" level=info msg="Starting container: 2ce0da1c78d96e3b60d9dcbced297952f5c4e36a1407c9551108d004be8d0016" id=83c28aef-f74f-48b1-9b17-4e2c82902015 name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 22:11:05 newest-cni-400889 crio[610]: time="2025-10-13T22:11:05.652662848Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=d1f59d5f-010f-4e86-9095-7db86e9dbb33 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:11:05 newest-cni-400889 crio[610]: time="2025-10-13T22:11:05.65478601Z" level=info msg="Started container" PID=1058 containerID=2ce0da1c78d96e3b60d9dcbced297952f5c4e36a1407c9551108d004be8d0016 description=kube-system/kindnet-k8zlc/kindnet-cni id=83c28aef-f74f-48b1-9b17-4e2c82902015 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b8a215e93d9a8896042575b36024873fefdf435d6f98ee1d429208396ba864ac
	Oct 13 22:11:05 newest-cni-400889 crio[610]: time="2025-10-13T22:11:05.688158725Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=8a931a27-abb2-4eb2-8454-06a908182226 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:11:05 newest-cni-400889 crio[610]: time="2025-10-13T22:11:05.702583761Z" level=info msg="Creating container: kube-system/kube-proxy-2c8dd/kube-proxy" id=47e7e6d4-3d62-433f-95d1-655b0cca27af name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:11:05 newest-cni-400889 crio[610]: time="2025-10-13T22:11:05.702864369Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:11:05 newest-cni-400889 crio[610]: time="2025-10-13T22:11:05.726124504Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:11:05 newest-cni-400889 crio[610]: time="2025-10-13T22:11:05.744996016Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:11:05 newest-cni-400889 crio[610]: time="2025-10-13T22:11:05.880910967Z" level=info msg="Created container bec5eebd1eb8276e9c73cbe2f0b464d2ff14db348e354fdc02e6dc9d6b2c215b: kube-system/kube-proxy-2c8dd/kube-proxy" id=47e7e6d4-3d62-433f-95d1-655b0cca27af name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:11:05 newest-cni-400889 crio[610]: time="2025-10-13T22:11:05.881704375Z" level=info msg="Starting container: bec5eebd1eb8276e9c73cbe2f0b464d2ff14db348e354fdc02e6dc9d6b2c215b" id=6ecac661-e773-45f4-b932-99bbd14026bc name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 22:11:05 newest-cni-400889 crio[610]: time="2025-10-13T22:11:05.8903956Z" level=info msg="Started container" PID=1068 containerID=bec5eebd1eb8276e9c73cbe2f0b464d2ff14db348e354fdc02e6dc9d6b2c215b description=kube-system/kube-proxy-2c8dd/kube-proxy id=6ecac661-e773-45f4-b932-99bbd14026bc name=/runtime.v1.RuntimeService/StartContainer sandboxID=aa5a0d3a5b2ccaff2b3bf9f9081ec522b8c9333e4be30519389ae0765e01ee51
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	bec5eebd1eb82       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   8 seconds ago       Running             kube-proxy                1                   aa5a0d3a5b2cc       kube-proxy-2c8dd                            kube-system
	2ce0da1c78d96       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   9 seconds ago       Running             kindnet-cni               1                   b8a215e93d9a8       kindnet-k8zlc                               kube-system
	29059c40b00ad       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   19 seconds ago      Running             kube-apiserver            1                   7f11434468299       kube-apiserver-newest-cni-400889            kube-system
	cbd66f7e4aa28       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   19 seconds ago      Running             kube-controller-manager   1                   91c8509d5061e       kube-controller-manager-newest-cni-400889   kube-system
	d7f909f4526bb       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   19 seconds ago      Running             kube-scheduler            1                   4da440eaab414       kube-scheduler-newest-cni-400889            kube-system
	41f03a8f2cf4c       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   19 seconds ago      Running             etcd                      1                   acb0a3a944b50       etcd-newest-cni-400889                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-400889
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-400889
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6d66ff63385795e7745a92b3d96cb54f5b977801
	                    minikube.k8s.io/name=newest-cni-400889
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_13T22_10_37_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 Oct 2025 22:10:32 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-400889
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 Oct 2025 22:11:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 Oct 2025 22:11:05 +0000   Mon, 13 Oct 2025 22:10:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 Oct 2025 22:11:05 +0000   Mon, 13 Oct 2025 22:10:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 Oct 2025 22:11:05 +0000   Mon, 13 Oct 2025 22:10:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Mon, 13 Oct 2025 22:11:05 +0000   Mon, 13 Oct 2025 22:10:29 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-400889
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 b9d5f35dbbd74029b922b46f57d6faf8
	  System UUID:                081b52d5-83f5-4259-9831-31b23d524c2c
	  Boot ID:                    d306204e-e74c-4697-9887-c9f3c96cc083
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-400889                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         38s
	  kube-system                 kindnet-k8zlc                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      34s
	  kube-system                 kube-apiserver-newest-cni-400889             250m (12%)    0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-controller-manager-newest-cni-400889    200m (10%)    0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-proxy-2c8dd                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-scheduler-newest-cni-400889             100m (5%)     0 (0%)      0 (0%)           0 (0%)         38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 31s                kube-proxy       
	  Normal   Starting                 5s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  46s (x8 over 46s)  kubelet          Node newest-cni-400889 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    46s (x8 over 46s)  kubelet          Node newest-cni-400889 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     46s (x8 over 46s)  kubelet          Node newest-cni-400889 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    38s                kubelet          Node newest-cni-400889 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 38s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  38s                kubelet          Node newest-cni-400889 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     38s                kubelet          Node newest-cni-400889 status is now: NodeHasSufficientPID
	  Normal   Starting                 38s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           34s                node-controller  Node newest-cni-400889 event: Registered Node newest-cni-400889 in Controller
	  Normal   Starting                 20s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 20s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  20s (x8 over 20s)  kubelet          Node newest-cni-400889 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    20s (x8 over 20s)  kubelet          Node newest-cni-400889 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     20s (x8 over 20s)  kubelet          Node newest-cni-400889 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5s                 node-controller  Node newest-cni-400889 event: Registered Node newest-cni-400889 in Controller
	
	
	==> dmesg <==
	[  +7.684868] overlayfs: idmapped layers are currently not supported
	[Oct13 21:43] overlayfs: idmapped layers are currently not supported
	[ +17.500139] overlayfs: idmapped layers are currently not supported
	[Oct13 21:44] overlayfs: idmapped layers are currently not supported
	[ +25.978359] overlayfs: idmapped layers are currently not supported
	[Oct13 21:46] overlayfs: idmapped layers are currently not supported
	[Oct13 21:47] overlayfs: idmapped layers are currently not supported
	[Oct13 21:49] overlayfs: idmapped layers are currently not supported
	[Oct13 21:50] overlayfs: idmapped layers are currently not supported
	[Oct13 21:51] overlayfs: idmapped layers are currently not supported
	[Oct13 21:53] overlayfs: idmapped layers are currently not supported
	[Oct13 21:54] overlayfs: idmapped layers are currently not supported
	[Oct13 21:55] overlayfs: idmapped layers are currently not supported
	[Oct13 22:02] overlayfs: idmapped layers are currently not supported
	[Oct13 22:04] overlayfs: idmapped layers are currently not supported
	[ +37.438407] overlayfs: idmapped layers are currently not supported
	[Oct13 22:05] overlayfs: idmapped layers are currently not supported
	[Oct13 22:06] overlayfs: idmapped layers are currently not supported
	[Oct13 22:07] overlayfs: idmapped layers are currently not supported
	[ +29.672836] overlayfs: idmapped layers are currently not supported
	[Oct13 22:08] overlayfs: idmapped layers are currently not supported
	[Oct13 22:09] overlayfs: idmapped layers are currently not supported
	[Oct13 22:10] overlayfs: idmapped layers are currently not supported
	[ +26.243538] overlayfs: idmapped layers are currently not supported
	[  +3.497977] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [41f03a8f2cf4c63c92345cd9504c3c2a0150d9555a97a54de9a811e88e7eb0f6] <==
	{"level":"warn","ts":"2025-10-13T22:10:59.773017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:10:59.840483Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:10:59.932053Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:11:00.116101Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:11:00.400005Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:11:00.453685Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:11:00.516426Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:11:00.596831Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:11:00.650105Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:11:00.707445Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:11:00.746155Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:11:00.766679Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:11:00.809920Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:11:00.848040Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:11:00.908978Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59746","server-name":"","error":"read tcp 127.0.0.1:2379->127.0.0.1:59746: read: connection reset by peer"}
	{"level":"warn","ts":"2025-10-13T22:11:00.940730Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:11:00.985040Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:11:01.020811Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:11:01.057317Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:11:01.080026Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:11:01.115599Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:11:01.136759Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:11:01.166315Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:11:01.206044Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:11:01.437703Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59908","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 22:11:14 up  1:53,  0 user,  load average: 6.06, 3.72, 2.63
	Linux newest-cni-400889 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [2ce0da1c78d96e3b60d9dcbced297952f5c4e36a1407c9551108d004be8d0016] <==
	I1013 22:11:05.880151       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1013 22:11:05.893198       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1013 22:11:05.893300       1 main.go:148] setting mtu 1500 for CNI 
	I1013 22:11:05.893312       1 main.go:178] kindnetd IP family: "ipv4"
	I1013 22:11:05.893326       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-13T22:11:06Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1013 22:11:06.141048       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1013 22:11:06.141277       1 controller.go:381] "Waiting for informer caches to sync"
	I1013 22:11:06.141294       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1013 22:11:06.141508       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [29059c40b00ad04ad44738293f0b2017c88e1b61ccaeaff02d2db844814fa5f1] <==
	I1013 22:11:04.513645       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1013 22:11:04.513668       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1013 22:11:04.573021       1 aggregator.go:171] initial CRD sync complete...
	I1013 22:11:04.573044       1 autoregister_controller.go:144] Starting autoregister controller
	I1013 22:11:04.573052       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1013 22:11:04.757717       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1013 22:11:04.759484       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1013 22:11:04.759704       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1013 22:11:04.759721       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1013 22:11:04.782088       1 cache.go:39] Caches are synced for autoregister controller
	I1013 22:11:04.782305       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1013 22:11:04.855057       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1013 22:11:04.912234       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1013 22:11:04.921508       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1013 22:11:05.029159       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1013 22:11:05.874675       1 controller.go:667] quota admission added evaluator for: namespaces
	I1013 22:11:06.286479       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1013 22:11:06.456304       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1013 22:11:06.505722       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1013 22:11:06.884159       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.85.243"}
	I1013 22:11:07.001466       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.68.106"}
	I1013 22:11:10.253704       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1013 22:11:10.261137       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1013 22:11:10.350153       1 controller.go:667] quota admission added evaluator for: endpoints
	I1013 22:11:10.354476       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [cbd66f7e4aa28f071adbbc9c82004ce2e6b5d7758657b18b8705a841c024a4f4] <==
	I1013 22:11:09.922545       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1013 22:11:09.927834       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1013 22:11:09.928212       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1013 22:11:09.930590       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1013 22:11:09.931873       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1013 22:11:09.935084       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1013 22:11:09.938405       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1013 22:11:09.945656       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1013 22:11:09.955837       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1013 22:11:09.956938       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1013 22:11:09.956996       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1013 22:11:09.957764       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1013 22:11:09.961591       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1013 22:11:09.963527       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1013 22:11:09.967986       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1013 22:11:09.972161       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1013 22:11:09.980037       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1013 22:11:09.980083       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1013 22:11:09.980097       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1013 22:11:09.992704       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1013 22:11:09.992755       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1013 22:11:09.995920       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 22:11:10.065627       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 22:11:10.065731       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1013 22:11:10.065766       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [bec5eebd1eb8276e9c73cbe2f0b464d2ff14db348e354fdc02e6dc9d6b2c215b] <==
	I1013 22:11:07.403107       1 server_linux.go:53] "Using iptables proxy"
	I1013 22:11:07.508347       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 22:11:07.612588       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 22:11:07.612713       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1013 22:11:07.612829       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 22:11:09.277331       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1013 22:11:09.277397       1 server_linux.go:132] "Using iptables Proxier"
	I1013 22:11:09.820561       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 22:11:09.820891       1 server.go:527] "Version info" version="v1.34.1"
	I1013 22:11:09.820912       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 22:11:09.899435       1 config.go:106] "Starting endpoint slice config controller"
	I1013 22:11:09.899457       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 22:11:09.899852       1 config.go:200] "Starting service config controller"
	I1013 22:11:09.899860       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 22:11:09.900142       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 22:11:09.900148       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 22:11:09.905178       1 config.go:309] "Starting node config controller"
	I1013 22:11:09.905198       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 22:11:09.905205       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 22:11:10.034792       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1013 22:11:10.036554       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1013 22:11:10.051956       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [d7f909f4526bb28a23b8c602214df3866cb2dc9cc28c820891839a829fc81270] <==
	I1013 22:11:01.536048       1 serving.go:386] Generated self-signed cert in-memory
	I1013 22:11:09.592222       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1013 22:11:09.592261       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 22:11:09.656733       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1013 22:11:09.656820       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1013 22:11:09.656837       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1013 22:11:09.656875       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1013 22:11:09.657493       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 22:11:09.657512       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 22:11:09.657562       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1013 22:11:09.657580       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1013 22:11:09.856902       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1013 22:11:09.857777       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1013 22:11:09.881439       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 13 22:11:00 newest-cni-400889 kubelet[728]: E1013 22:11:00.438515     728 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-400889\" not found" node="newest-cni-400889"
	Oct 13 22:11:00 newest-cni-400889 kubelet[728]: E1013 22:11:00.608369     728 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-400889\" not found" node="newest-cni-400889"
	Oct 13 22:11:03 newest-cni-400889 kubelet[728]: I1013 22:11:03.629138     728 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-400889"
	Oct 13 22:11:03 newest-cni-400889 kubelet[728]: I1013 22:11:03.710899     728 apiserver.go:52] "Watching apiserver"
	Oct 13 22:11:03 newest-cni-400889 kubelet[728]: I1013 22:11:03.925593     728 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 13 22:11:04 newest-cni-400889 kubelet[728]: I1013 22:11:04.718721     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e0608056-bfa9-46cf-a6c4-da63c05dc51a-lib-modules\") pod \"kube-proxy-2c8dd\" (UID: \"e0608056-bfa9-46cf-a6c4-da63c05dc51a\") " pod="kube-system/kube-proxy-2c8dd"
	Oct 13 22:11:04 newest-cni-400889 kubelet[728]: I1013 22:11:04.718787     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/bce90592-0127-4946-bc83-a6b06490dcc1-cni-cfg\") pod \"kindnet-k8zlc\" (UID: \"bce90592-0127-4946-bc83-a6b06490dcc1\") " pod="kube-system/kindnet-k8zlc"
	Oct 13 22:11:04 newest-cni-400889 kubelet[728]: I1013 22:11:04.718811     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e0608056-bfa9-46cf-a6c4-da63c05dc51a-xtables-lock\") pod \"kube-proxy-2c8dd\" (UID: \"e0608056-bfa9-46cf-a6c4-da63c05dc51a\") " pod="kube-system/kube-proxy-2c8dd"
	Oct 13 22:11:04 newest-cni-400889 kubelet[728]: I1013 22:11:04.718845     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bce90592-0127-4946-bc83-a6b06490dcc1-xtables-lock\") pod \"kindnet-k8zlc\" (UID: \"bce90592-0127-4946-bc83-a6b06490dcc1\") " pod="kube-system/kindnet-k8zlc"
	Oct 13 22:11:04 newest-cni-400889 kubelet[728]: I1013 22:11:04.718862     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bce90592-0127-4946-bc83-a6b06490dcc1-lib-modules\") pod \"kindnet-k8zlc\" (UID: \"bce90592-0127-4946-bc83-a6b06490dcc1\") " pod="kube-system/kindnet-k8zlc"
	Oct 13 22:11:05 newest-cni-400889 kubelet[728]: I1013 22:11:05.103641     728 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 13 22:11:05 newest-cni-400889 kubelet[728]: E1013 22:11:05.103913     728 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-400889\" already exists" pod="kube-system/etcd-newest-cni-400889"
	Oct 13 22:11:05 newest-cni-400889 kubelet[728]: I1013 22:11:05.103936     728 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-400889"
	Oct 13 22:11:05 newest-cni-400889 kubelet[728]: I1013 22:11:05.111353     728 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-400889"
	Oct 13 22:11:05 newest-cni-400889 kubelet[728]: I1013 22:11:05.111467     728 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-400889"
	Oct 13 22:11:05 newest-cni-400889 kubelet[728]: I1013 22:11:05.111500     728 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 13 22:11:05 newest-cni-400889 kubelet[728]: I1013 22:11:05.117644     728 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 13 22:11:05 newest-cni-400889 kubelet[728]: E1013 22:11:05.235054     728 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-400889\" already exists" pod="kube-system/kube-apiserver-newest-cni-400889"
	Oct 13 22:11:05 newest-cni-400889 kubelet[728]: I1013 22:11:05.235085     728 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-400889"
	Oct 13 22:11:05 newest-cni-400889 kubelet[728]: E1013 22:11:05.280764     728 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-400889\" already exists" pod="kube-system/kube-controller-manager-newest-cni-400889"
	Oct 13 22:11:05 newest-cni-400889 kubelet[728]: I1013 22:11:05.280797     728 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-400889"
	Oct 13 22:11:05 newest-cni-400889 kubelet[728]: E1013 22:11:05.456024     728 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-400889\" already exists" pod="kube-system/kube-scheduler-newest-cni-400889"
	Oct 13 22:11:09 newest-cni-400889 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 13 22:11:09 newest-cni-400889 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 13 22:11:09 newest-cni-400889 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-400889 -n newest-cni-400889
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-400889 -n newest-cni-400889: exit status 2 (463.170599ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-400889 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-cc4wf storage-provisioner dashboard-metrics-scraper-6ffb444bf9-w6242 kubernetes-dashboard-855c9754f9-5p8tk
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-400889 describe pod coredns-66bc5c9577-cc4wf storage-provisioner dashboard-metrics-scraper-6ffb444bf9-w6242 kubernetes-dashboard-855c9754f9-5p8tk
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-400889 describe pod coredns-66bc5c9577-cc4wf storage-provisioner dashboard-metrics-scraper-6ffb444bf9-w6242 kubernetes-dashboard-855c9754f9-5p8tk: exit status 1 (120.606769ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-cc4wf" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-w6242" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-5p8tk" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-400889 describe pod coredns-66bc5c9577-cc4wf storage-provisioner dashboard-metrics-scraper-6ffb444bf9-w6242 kubernetes-dashboard-855c9754f9-5p8tk: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (7.51s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (7.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-007533 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p default-k8s-diff-port-007533 --alsologtostderr -v=1: exit status 80 (2.203434133s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-007533 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1013 22:11:56.312796  215475 out.go:360] Setting OutFile to fd 1 ...
	I1013 22:11:56.312960  215475 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:11:56.312968  215475 out.go:374] Setting ErrFile to fd 2...
	I1013 22:11:56.312972  215475 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:11:56.313229  215475 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-2495/.minikube/bin
	I1013 22:11:56.313465  215475 out.go:368] Setting JSON to false
	I1013 22:11:56.313480  215475 mustload.go:65] Loading cluster: default-k8s-diff-port-007533
	I1013 22:11:56.313848  215475 config.go:182] Loaded profile config "default-k8s-diff-port-007533": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:11:56.314314  215475 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-007533 --format={{.State.Status}}
	I1013 22:11:56.335651  215475 host.go:66] Checking if "default-k8s-diff-port-007533" exists ...
	I1013 22:11:56.335997  215475 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 22:11:56.455703  215475 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-13 22:11:56.444850526 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 22:11:56.457161  215475 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-007533 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1013 22:11:56.461985  215475 out.go:179] * Pausing node default-k8s-diff-port-007533 ... 
	I1013 22:11:56.465475  215475 host.go:66] Checking if "default-k8s-diff-port-007533" exists ...
	I1013 22:11:56.465797  215475 ssh_runner.go:195] Run: systemctl --version
	I1013 22:11:56.465846  215475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-007533
	I1013 22:11:56.493456  215475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33096 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/default-k8s-diff-port-007533/id_rsa Username:docker}
	I1013 22:11:56.608306  215475 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 22:11:56.631933  215475 pause.go:52] kubelet running: true
	I1013 22:11:56.632011  215475 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1013 22:11:57.039126  215475 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1013 22:11:57.039219  215475 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1013 22:11:57.163311  215475 cri.go:89] found id: "55549ec52a2ac020ce56c1f9974b4fd36115996f322fd5e802bb928b4087a999"
	I1013 22:11:57.163345  215475 cri.go:89] found id: "6a5df3c5a9045027560adb2e9d88517dd47a910cecaaaaec5cf2423307ae5e71"
	I1013 22:11:57.163351  215475 cri.go:89] found id: "2619fffe3a121a9831056e97ad35ee96fa24908d3db94f825e51faa63ed6a795"
	I1013 22:11:57.163355  215475 cri.go:89] found id: "91481353c67cc68bb4298db655c0ba872a70547e325a61bce015a48724300e10"
	I1013 22:11:57.163358  215475 cri.go:89] found id: "b7a49ab1e9406cec6e4d3573a11414997615cb5773b9431d80fda6e6f6b41fa8"
	I1013 22:11:57.163361  215475 cri.go:89] found id: "3970a5fddb4ed9fafd03a56430ff0a855693ce410e375d5e6f5b23115bdec4fe"
	I1013 22:11:57.163364  215475 cri.go:89] found id: "5bbc4021a2610d0a72615ef54a61b83477debc9e67e27338b6ebdad10f29a7bb"
	I1013 22:11:57.163368  215475 cri.go:89] found id: "bd56c0184294021b9044112e6391397dce68b76fd94d9c861cdd5ada9d399899"
	I1013 22:11:57.163372  215475 cri.go:89] found id: "99b9c491479a5e957e19a4c1ca9d1a62f9cde3467897c3b831fc01afd815b1f7"
	I1013 22:11:57.163377  215475 cri.go:89] found id: "abf6fe6a0c2b477370ce1b10a59d5afef966b3d2278b2343bb8f29356a375406"
	I1013 22:11:57.163381  215475 cri.go:89] found id: "c4bacb88f25bc5a14376dfc758244b8c2ccbf962e0fd287744b5751bf14025f0"
	I1013 22:11:57.163384  215475 cri.go:89] found id: ""
	I1013 22:11:57.163437  215475 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 22:11:57.180154  215475 retry.go:31] will retry after 331.999383ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:11:57Z" level=error msg="open /run/runc: no such file or directory"
	I1013 22:11:57.512407  215475 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 22:11:57.528633  215475 pause.go:52] kubelet running: false
	I1013 22:11:57.528695  215475 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1013 22:11:57.743582  215475 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1013 22:11:57.743654  215475 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1013 22:11:57.809427  215475 cri.go:89] found id: "55549ec52a2ac020ce56c1f9974b4fd36115996f322fd5e802bb928b4087a999"
	I1013 22:11:57.809447  215475 cri.go:89] found id: "6a5df3c5a9045027560adb2e9d88517dd47a910cecaaaaec5cf2423307ae5e71"
	I1013 22:11:57.809452  215475 cri.go:89] found id: "2619fffe3a121a9831056e97ad35ee96fa24908d3db94f825e51faa63ed6a795"
	I1013 22:11:57.809456  215475 cri.go:89] found id: "91481353c67cc68bb4298db655c0ba872a70547e325a61bce015a48724300e10"
	I1013 22:11:57.809459  215475 cri.go:89] found id: "b7a49ab1e9406cec6e4d3573a11414997615cb5773b9431d80fda6e6f6b41fa8"
	I1013 22:11:57.809462  215475 cri.go:89] found id: "3970a5fddb4ed9fafd03a56430ff0a855693ce410e375d5e6f5b23115bdec4fe"
	I1013 22:11:57.809466  215475 cri.go:89] found id: "5bbc4021a2610d0a72615ef54a61b83477debc9e67e27338b6ebdad10f29a7bb"
	I1013 22:11:57.809469  215475 cri.go:89] found id: "bd56c0184294021b9044112e6391397dce68b76fd94d9c861cdd5ada9d399899"
	I1013 22:11:57.809480  215475 cri.go:89] found id: "99b9c491479a5e957e19a4c1ca9d1a62f9cde3467897c3b831fc01afd815b1f7"
	I1013 22:11:57.809487  215475 cri.go:89] found id: "abf6fe6a0c2b477370ce1b10a59d5afef966b3d2278b2343bb8f29356a375406"
	I1013 22:11:57.809490  215475 cri.go:89] found id: "c4bacb88f25bc5a14376dfc758244b8c2ccbf962e0fd287744b5751bf14025f0"
	I1013 22:11:57.809493  215475 cri.go:89] found id: ""
	I1013 22:11:57.809542  215475 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 22:11:57.820458  215475 retry.go:31] will retry after 330.825394ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:11:57Z" level=error msg="open /run/runc: no such file or directory"
	I1013 22:11:58.151879  215475 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 22:11:58.164929  215475 pause.go:52] kubelet running: false
	I1013 22:11:58.164998  215475 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1013 22:11:58.330041  215475 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1013 22:11:58.330148  215475 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1013 22:11:58.408407  215475 cri.go:89] found id: "55549ec52a2ac020ce56c1f9974b4fd36115996f322fd5e802bb928b4087a999"
	I1013 22:11:58.408428  215475 cri.go:89] found id: "6a5df3c5a9045027560adb2e9d88517dd47a910cecaaaaec5cf2423307ae5e71"
	I1013 22:11:58.408433  215475 cri.go:89] found id: "2619fffe3a121a9831056e97ad35ee96fa24908d3db94f825e51faa63ed6a795"
	I1013 22:11:58.408437  215475 cri.go:89] found id: "91481353c67cc68bb4298db655c0ba872a70547e325a61bce015a48724300e10"
	I1013 22:11:58.408440  215475 cri.go:89] found id: "b7a49ab1e9406cec6e4d3573a11414997615cb5773b9431d80fda6e6f6b41fa8"
	I1013 22:11:58.408443  215475 cri.go:89] found id: "3970a5fddb4ed9fafd03a56430ff0a855693ce410e375d5e6f5b23115bdec4fe"
	I1013 22:11:58.408446  215475 cri.go:89] found id: "5bbc4021a2610d0a72615ef54a61b83477debc9e67e27338b6ebdad10f29a7bb"
	I1013 22:11:58.408450  215475 cri.go:89] found id: "bd56c0184294021b9044112e6391397dce68b76fd94d9c861cdd5ada9d399899"
	I1013 22:11:58.408454  215475 cri.go:89] found id: "99b9c491479a5e957e19a4c1ca9d1a62f9cde3467897c3b831fc01afd815b1f7"
	I1013 22:11:58.408469  215475 cri.go:89] found id: "abf6fe6a0c2b477370ce1b10a59d5afef966b3d2278b2343bb8f29356a375406"
	I1013 22:11:58.408483  215475 cri.go:89] found id: "c4bacb88f25bc5a14376dfc758244b8c2ccbf962e0fd287744b5751bf14025f0"
	I1013 22:11:58.408487  215475 cri.go:89] found id: ""
	I1013 22:11:58.408541  215475 ssh_runner.go:195] Run: sudo runc list -f json
	I1013 22:11:58.425208  215475 out.go:203] 
	W1013 22:11:58.428102  215475 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T22:11:58Z" level=error msg="open /run/runc: no such file or directory"
	
	W1013 22:11:58.428125  215475 out.go:285] * 
	W1013 22:11:58.434257  215475 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1013 22:11:58.437143  215475 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p default-k8s-diff-port-007533 --alsologtostderr -v=1 failed: exit status 80
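The pause failure above reduces to "sudo runc list -f json" exiting with status 1 because /run/runc does not exist on the node (see the stderr lines in the log). The following is a minimal, hypothetical Go sketch, not minikube's own code, that reproduces the same probe over "minikube ssh" with the short retry the retry.go lines show; the binary path and profile name are the ones used in this report.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// Reproduce the probe that pause keeps retrying in the log above:
// "sudo runc list -f json" on the node, run over "minikube ssh".
// Assumptions: the minikube binary path and the profile name from this report.
func main() {
	const bin = "out/minikube-linux-arm64"
	const profile = "default-k8s-diff-port-007533"

	for attempt := 1; attempt <= 3; attempt++ {
		out, err := exec.Command(bin, "-p", profile, "ssh", "--",
			"sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err == nil {
			fmt.Printf("runc list succeeded:\n%s", out)
			return
		}
		// The log shows roughly 330ms between attempts before giving up.
		fmt.Printf("attempt %d failed: %v\n%s", attempt, err, out)
		time.Sleep(330 * time.Millisecond)
	}
	fmt.Println("giving up: /run/runc is likely missing on the node")
}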
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-007533
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-007533:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "42b7859eebb1b617062a071a3162eafb492415d4bedb857080058ccb1421015f",
	        "Created": "2025-10-13T22:09:09.643322038Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 208715,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-13T22:10:49.464822718Z",
	            "FinishedAt": "2025-10-13T22:10:48.68038616Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/42b7859eebb1b617062a071a3162eafb492415d4bedb857080058ccb1421015f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/42b7859eebb1b617062a071a3162eafb492415d4bedb857080058ccb1421015f/hostname",
	        "HostsPath": "/var/lib/docker/containers/42b7859eebb1b617062a071a3162eafb492415d4bedb857080058ccb1421015f/hosts",
	        "LogPath": "/var/lib/docker/containers/42b7859eebb1b617062a071a3162eafb492415d4bedb857080058ccb1421015f/42b7859eebb1b617062a071a3162eafb492415d4bedb857080058ccb1421015f-json.log",
	        "Name": "/default-k8s-diff-port-007533",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "default-k8s-diff-port-007533:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-007533",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "42b7859eebb1b617062a071a3162eafb492415d4bedb857080058ccb1421015f",
	                "LowerDir": "/var/lib/docker/overlay2/3a110be703c83a69e062725614d21230b1ee1b9bfe56d3879096cfac4be3ae94-init/diff:/var/lib/docker/overlay2/160e637be34680c755ffc6109c686a7fe5c4dfcba06c9274b3806724dc064518/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3a110be703c83a69e062725614d21230b1ee1b9bfe56d3879096cfac4be3ae94/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3a110be703c83a69e062725614d21230b1ee1b9bfe56d3879096cfac4be3ae94/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3a110be703c83a69e062725614d21230b1ee1b9bfe56d3879096cfac4be3ae94/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-007533",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-007533/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-007533",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-007533",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-007533",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "43a204ddcc6f897ab1aa422931fed29e2f2a2b7f1b724af05bb618655af48148",
	            "SandboxKey": "/var/run/docker/netns/43a204ddcc6f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-007533": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "52:8a:11:e4:c2:33",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c207adec0a146b3ee3021b2c1eb78ecdd6cde3a3946c5c593fd373dfc1a3d79d",
	                    "EndpointID": "c538c0e4c52446b8f994909f1609277901ad2cfeb3e8ba2463cbd49f9d884665",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-007533",
	                        "42b7859eebb1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
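For reference, the inspect output above carries the host-side port bindings of the paused profile (22/tcp on 127.0.0.1:33096, 8444/tcp on 127.0.0.1:33099, and so on). A minimal sketch, assuming only the docker CLI on PATH and the container name from this report, that extracts just those mappings from the same JSON:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Print only NetworkSettings.Ports from "docker inspect", the block that
// holds the 22/tcp -> 127.0.0.1:33096 style mappings shown above.
// Assumptions: docker CLI on PATH, container name taken from this report.
func main() {
	const name = "default-k8s-diff-port-007533"

	out, err := exec.Command("docker", "inspect", "-f",
		"{{json .NetworkSettings.Ports}}", name).Output()
	if err != nil {
		panic(err)
	}

	var ports map[string][]struct {
		HostIp   string
		HostPort string
	}
	if err := json.Unmarshal(out, &ports); err != nil {
		panic(err)
	}
	for proto, bindings := range ports {
		for _, b := range bindings {
			fmt.Printf("%s -> %s:%s\n", proto, b.HostIp, b.HostPort)
		}
	}
}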
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-007533 -n default-k8s-diff-port-007533
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-007533 -n default-k8s-diff-port-007533: exit status 2 (361.750105ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-007533 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-007533 logs -n 25: (1.516567797s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ pause   │ -p no-preload-998398 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-998398            │ jenkins │ v1.37.0 │ 13 Oct 25 22:08 UTC │                     │
	│ delete  │ -p no-preload-998398                                                                                                                                                                                                                          │ no-preload-998398            │ jenkins │ v1.37.0 │ 13 Oct 25 22:08 UTC │ 13 Oct 25 22:09 UTC │
	│ delete  │ -p no-preload-998398                                                                                                                                                                                                                          │ no-preload-998398            │ jenkins │ v1.37.0 │ 13 Oct 25 22:09 UTC │ 13 Oct 25 22:09 UTC │
	│ delete  │ -p disable-driver-mounts-691681                                                                                                                                                                                                               │ disable-driver-mounts-691681 │ jenkins │ v1.37.0 │ 13 Oct 25 22:09 UTC │ 13 Oct 25 22:09 UTC │
	│ start   │ -p default-k8s-diff-port-007533 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-007533 │ jenkins │ v1.37.0 │ 13 Oct 25 22:09 UTC │ 13 Oct 25 22:10 UTC │
	│ image   │ embed-certs-251758 image list --format=json                                                                                                                                                                                                   │ embed-certs-251758           │ jenkins │ v1.37.0 │ 13 Oct 25 22:09 UTC │ 13 Oct 25 22:09 UTC │
	│ pause   │ -p embed-certs-251758 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-251758           │ jenkins │ v1.37.0 │ 13 Oct 25 22:09 UTC │                     │
	│ delete  │ -p embed-certs-251758                                                                                                                                                                                                                         │ embed-certs-251758           │ jenkins │ v1.37.0 │ 13 Oct 25 22:09 UTC │ 13 Oct 25 22:09 UTC │
	│ delete  │ -p embed-certs-251758                                                                                                                                                                                                                         │ embed-certs-251758           │ jenkins │ v1.37.0 │ 13 Oct 25 22:10 UTC │ 13 Oct 25 22:10 UTC │
	│ start   │ -p newest-cni-400889 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-400889            │ jenkins │ v1.37.0 │ 13 Oct 25 22:10 UTC │ 13 Oct 25 22:10 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-007533 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-007533 │ jenkins │ v1.37.0 │ 13 Oct 25 22:10 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-007533 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-007533 │ jenkins │ v1.37.0 │ 13 Oct 25 22:10 UTC │ 13 Oct 25 22:10 UTC │
	│ addons  │ enable metrics-server -p newest-cni-400889 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-400889            │ jenkins │ v1.37.0 │ 13 Oct 25 22:10 UTC │                     │
	│ stop    │ -p newest-cni-400889 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-400889            │ jenkins │ v1.37.0 │ 13 Oct 25 22:10 UTC │ 13 Oct 25 22:10 UTC │
	│ addons  │ enable dashboard -p newest-cni-400889 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-400889            │ jenkins │ v1.37.0 │ 13 Oct 25 22:10 UTC │ 13 Oct 25 22:10 UTC │
	│ start   │ -p newest-cni-400889 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-400889            │ jenkins │ v1.37.0 │ 13 Oct 25 22:10 UTC │ 13 Oct 25 22:11 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-007533 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-007533 │ jenkins │ v1.37.0 │ 13 Oct 25 22:10 UTC │ 13 Oct 25 22:10 UTC │
	│ start   │ -p default-k8s-diff-port-007533 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-007533 │ jenkins │ v1.37.0 │ 13 Oct 25 22:10 UTC │ 13 Oct 25 22:11 UTC │
	│ image   │ newest-cni-400889 image list --format=json                                                                                                                                                                                                    │ newest-cni-400889            │ jenkins │ v1.37.0 │ 13 Oct 25 22:11 UTC │ 13 Oct 25 22:11 UTC │
	│ pause   │ -p newest-cni-400889 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-400889            │ jenkins │ v1.37.0 │ 13 Oct 25 22:11 UTC │                     │
	│ delete  │ -p newest-cni-400889                                                                                                                                                                                                                          │ newest-cni-400889            │ jenkins │ v1.37.0 │ 13 Oct 25 22:11 UTC │ 13 Oct 25 22:11 UTC │
	│ delete  │ -p newest-cni-400889                                                                                                                                                                                                                          │ newest-cni-400889            │ jenkins │ v1.37.0 │ 13 Oct 25 22:11 UTC │ 13 Oct 25 22:11 UTC │
	│ start   │ -p auto-122822 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-122822                  │ jenkins │ v1.37.0 │ 13 Oct 25 22:11 UTC │                     │
	│ image   │ default-k8s-diff-port-007533 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-007533 │ jenkins │ v1.37.0 │ 13 Oct 25 22:11 UTC │ 13 Oct 25 22:11 UTC │
	│ pause   │ -p default-k8s-diff-port-007533 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-007533 │ jenkins │ v1.37.0 │ 13 Oct 25 22:11 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 22:11:18
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 22:11:18.641464  213009 out.go:360] Setting OutFile to fd 1 ...
	I1013 22:11:18.641578  213009 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:11:18.641583  213009 out.go:374] Setting ErrFile to fd 2...
	I1013 22:11:18.641587  213009 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:11:18.641925  213009 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-2495/.minikube/bin
	I1013 22:11:18.642397  213009 out.go:368] Setting JSON to false
	I1013 22:11:18.643318  213009 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":6813,"bootTime":1760386666,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1013 22:11:18.643408  213009 start.go:141] virtualization:  
	I1013 22:11:18.647934  213009 out.go:179] * [auto-122822] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1013 22:11:18.652898  213009 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 22:11:18.652944  213009 notify.go:220] Checking for updates...
	I1013 22:11:18.661555  213009 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 22:11:18.665039  213009 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-2495/kubeconfig
	I1013 22:11:18.668424  213009 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-2495/.minikube
	I1013 22:11:18.672238  213009 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1013 22:11:18.675948  213009 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 22:11:18.680216  213009 config.go:182] Loaded profile config "default-k8s-diff-port-007533": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:11:18.680352  213009 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 22:11:18.716032  213009 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1013 22:11:18.716148  213009 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 22:11:18.828418  213009 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-13 22:11:18.814430305 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 22:11:18.828539  213009 docker.go:318] overlay module found
	I1013 22:11:18.833204  213009 out.go:179] * Using the docker driver based on user configuration
	I1013 22:11:18.836511  213009 start.go:305] selected driver: docker
	I1013 22:11:18.836534  213009 start.go:925] validating driver "docker" against <nil>
	I1013 22:11:18.836547  213009 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 22:11:18.837295  213009 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 22:11:18.932702  213009 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-13 22:11:18.921171572 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 22:11:18.932859  213009 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1013 22:11:18.933158  213009 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 22:11:18.936888  213009 out.go:179] * Using Docker driver with root privileges
	I1013 22:11:18.940714  213009 cni.go:84] Creating CNI manager for ""
	I1013 22:11:18.940785  213009 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 22:11:18.940800  213009 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1013 22:11:18.940879  213009 start.go:349] cluster config:
	{Name:auto-122822 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-122822 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:cri
o CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: Au
toPauseInterval:1m0s}
	I1013 22:11:18.944337  213009 out.go:179] * Starting "auto-122822" primary control-plane node in "auto-122822" cluster
	I1013 22:11:18.947512  213009 cache.go:123] Beginning downloading kic base image for docker with crio
	I1013 22:11:18.950814  213009 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1013 22:11:18.953931  213009 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:11:18.953984  213009 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-2495/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1013 22:11:18.954000  213009 cache.go:58] Caching tarball of preloaded images
	I1013 22:11:18.954079  213009 preload.go:233] Found /home/jenkins/minikube-integration/21724-2495/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1013 22:11:18.954089  213009 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1013 22:11:18.954199  213009 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/auto-122822/config.json ...
	I1013 22:11:18.954215  213009 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/auto-122822/config.json: {Name:mkddd2f2811ae79867dec424e9f6cd31b3ebf145 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:11:18.954345  213009 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1013 22:11:18.984028  213009 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1013 22:11:18.984047  213009 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1013 22:11:18.984065  213009 cache.go:232] Successfully downloaded all kic artifacts
	I1013 22:11:18.984088  213009 start.go:360] acquireMachinesLock for auto-122822: {Name:mka0d1339a97472877dad96588ce1f47613d1d53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 22:11:18.984185  213009 start.go:364] duration metric: took 81.705µs to acquireMachinesLock for "auto-122822"
	I1013 22:11:18.984210  213009 start.go:93] Provisioning new machine with config: &{Name:auto-122822 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-122822 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: Soc
ketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 22:11:18.984280  213009 start.go:125] createHost starting for "" (driver="docker")
	W1013 22:11:16.246881  208589 pod_ready.go:104] pod "coredns-66bc5c9577-vftdh" is not "Ready", error: <nil>
	W1013 22:11:18.254581  208589 pod_ready.go:104] pod "coredns-66bc5c9577-vftdh" is not "Ready", error: <nil>
	I1013 22:11:18.989121  213009 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1013 22:11:18.989331  213009 start.go:159] libmachine.API.Create for "auto-122822" (driver="docker")
	I1013 22:11:18.989367  213009 client.go:168] LocalClient.Create starting
	I1013 22:11:18.989430  213009 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem
	I1013 22:11:18.989461  213009 main.go:141] libmachine: Decoding PEM data...
	I1013 22:11:18.989482  213009 main.go:141] libmachine: Parsing certificate...
	I1013 22:11:18.989540  213009 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem
	I1013 22:11:18.989557  213009 main.go:141] libmachine: Decoding PEM data...
	I1013 22:11:18.989567  213009 main.go:141] libmachine: Parsing certificate...
	I1013 22:11:18.989906  213009 cli_runner.go:164] Run: docker network inspect auto-122822 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1013 22:11:19.010616  213009 cli_runner.go:211] docker network inspect auto-122822 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1013 22:11:19.010700  213009 network_create.go:284] running [docker network inspect auto-122822] to gather additional debugging logs...
	I1013 22:11:19.010722  213009 cli_runner.go:164] Run: docker network inspect auto-122822
	W1013 22:11:19.032524  213009 cli_runner.go:211] docker network inspect auto-122822 returned with exit code 1
	I1013 22:11:19.032559  213009 network_create.go:287] error running [docker network inspect auto-122822]: docker network inspect auto-122822: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-122822 not found
	I1013 22:11:19.032572  213009 network_create.go:289] output of [docker network inspect auto-122822]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-122822 not found
	
	** /stderr **
	I1013 22:11:19.032678  213009 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 22:11:19.056983  213009 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-95647f6063f5 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:2e:3d:b3:ce:26:60} reservation:<nil>}
	I1013 22:11:19.057327  213009 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-524c3512c6b6 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:06:88:a1:02:e0:8e} reservation:<nil>}
	I1013 22:11:19.057644  213009 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-2d17b8b5c002 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:aa:ca:29:7e:1f:a0} reservation:<nil>}
	I1013 22:11:19.057883  213009 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-c207adec0a14 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:3a:30:41:df:49:ee} reservation:<nil>}
	I1013 22:11:19.058261  213009 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a629a0}
	I1013 22:11:19.058282  213009 network_create.go:124] attempt to create docker network auto-122822 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1013 22:11:19.058344  213009 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-122822 auto-122822
	I1013 22:11:19.111264  213009 network_create.go:108] docker network auto-122822 192.168.85.0/24 created
	I1013 22:11:19.111291  213009 kic.go:121] calculated static IP "192.168.85.2" for the "auto-122822" container
	I1013 22:11:19.111362  213009 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1013 22:11:19.135936  213009 cli_runner.go:164] Run: docker volume create auto-122822 --label name.minikube.sigs.k8s.io=auto-122822 --label created_by.minikube.sigs.k8s.io=true
	I1013 22:11:19.153756  213009 oci.go:103] Successfully created a docker volume auto-122822
	I1013 22:11:19.153833  213009 cli_runner.go:164] Run: docker run --rm --name auto-122822-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-122822 --entrypoint /usr/bin/test -v auto-122822:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1013 22:11:19.937998  213009 oci.go:107] Successfully prepared a docker volume auto-122822
	I1013 22:11:19.938051  213009 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:11:19.938071  213009 kic.go:194] Starting extracting preloaded images to volume ...
	I1013 22:11:19.938151  213009 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-2495/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-122822:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	W1013 22:11:20.744028  208589 pod_ready.go:104] pod "coredns-66bc5c9577-vftdh" is not "Ready", error: <nil>
	W1013 22:11:22.749499  208589 pod_ready.go:104] pod "coredns-66bc5c9577-vftdh" is not "Ready", error: <nil>
	I1013 22:11:25.345591  213009 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-2495/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-122822:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (5.407405914s)
	I1013 22:11:25.345632  213009 kic.go:203] duration metric: took 5.407557877s to extract preloaded images to volume ...
	W1013 22:11:25.345776  213009 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1013 22:11:25.345888  213009 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1013 22:11:25.437937  213009 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-122822 --name auto-122822 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-122822 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-122822 --network auto-122822 --ip 192.168.85.2 --volume auto-122822:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1013 22:11:25.929344  213009 cli_runner.go:164] Run: docker container inspect auto-122822 --format={{.State.Running}}
	I1013 22:11:25.953523  213009 cli_runner.go:164] Run: docker container inspect auto-122822 --format={{.State.Status}}
	I1013 22:11:25.988849  213009 cli_runner.go:164] Run: docker exec auto-122822 stat /var/lib/dpkg/alternatives/iptables
	I1013 22:11:26.057082  213009 oci.go:144] the created container "auto-122822" has a running status.
	I1013 22:11:26.057120  213009 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21724-2495/.minikube/machines/auto-122822/id_rsa...
	I1013 22:11:26.704836  213009 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21724-2495/.minikube/machines/auto-122822/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1013 22:11:26.728568  213009 cli_runner.go:164] Run: docker container inspect auto-122822 --format={{.State.Status}}
	I1013 22:11:26.752121  213009 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1013 22:11:26.752220  213009 kic_runner.go:114] Args: [docker exec --privileged auto-122822 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1013 22:11:26.821844  213009 cli_runner.go:164] Run: docker container inspect auto-122822 --format={{.State.Status}}
	I1013 22:11:26.847070  213009 machine.go:93] provisionDockerMachine start ...
	I1013 22:11:26.847154  213009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-122822
	I1013 22:11:26.877586  213009 main.go:141] libmachine: Using SSH client type: native
	I1013 22:11:26.877912  213009 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33101 <nil> <nil>}
	I1013 22:11:26.877923  213009 main.go:141] libmachine: About to run SSH command:
	hostname
	I1013 22:11:26.882739  213009 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	W1013 22:11:24.772310  208589 pod_ready.go:104] pod "coredns-66bc5c9577-vftdh" is not "Ready", error: <nil>
	W1013 22:11:27.243733  208589 pod_ready.go:104] pod "coredns-66bc5c9577-vftdh" is not "Ready", error: <nil>
	I1013 22:11:30.032904  213009 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-122822
	
	I1013 22:11:30.032930  213009 ubuntu.go:182] provisioning hostname "auto-122822"
	I1013 22:11:30.033006  213009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-122822
	I1013 22:11:30.063553  213009 main.go:141] libmachine: Using SSH client type: native
	I1013 22:11:30.063899  213009 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33101 <nil> <nil>}
	I1013 22:11:30.063915  213009 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-122822 && echo "auto-122822" | sudo tee /etc/hostname
	I1013 22:11:30.230112  213009 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-122822
	
	I1013 22:11:30.230183  213009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-122822
	I1013 22:11:30.254884  213009 main.go:141] libmachine: Using SSH client type: native
	I1013 22:11:30.255194  213009 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33101 <nil> <nil>}
	I1013 22:11:30.255216  213009 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-122822' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-122822/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-122822' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 22:11:30.399907  213009 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1013 22:11:30.399933  213009 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-2495/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-2495/.minikube}
	I1013 22:11:30.399952  213009 ubuntu.go:190] setting up certificates
	I1013 22:11:30.399961  213009 provision.go:84] configureAuth start
	I1013 22:11:30.400017  213009 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-122822
	I1013 22:11:30.418873  213009 provision.go:143] copyHostCerts
	I1013 22:11:30.418949  213009 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-2495/.minikube/ca.pem, removing ...
	I1013 22:11:30.418963  213009 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-2495/.minikube/ca.pem
	I1013 22:11:30.419043  213009 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-2495/.minikube/ca.pem (1082 bytes)
	I1013 22:11:30.419142  213009 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-2495/.minikube/cert.pem, removing ...
	I1013 22:11:30.419154  213009 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-2495/.minikube/cert.pem
	I1013 22:11:30.419181  213009 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-2495/.minikube/cert.pem (1123 bytes)
	I1013 22:11:30.419235  213009 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-2495/.minikube/key.pem, removing ...
	I1013 22:11:30.419244  213009 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-2495/.minikube/key.pem
	I1013 22:11:30.419269  213009 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-2495/.minikube/key.pem (1675 bytes)
	I1013 22:11:30.419322  213009 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-2495/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca-key.pem org=jenkins.auto-122822 san=[127.0.0.1 192.168.85.2 auto-122822 localhost minikube]
	I1013 22:11:31.111667  213009 provision.go:177] copyRemoteCerts
	I1013 22:11:31.111753  213009 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 22:11:31.111853  213009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-122822
	I1013 22:11:31.131573  213009 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33101 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/auto-122822/id_rsa Username:docker}
	I1013 22:11:31.235483  213009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1013 22:11:31.256482  213009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1013 22:11:31.275515  213009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1013 22:11:31.294388  213009 provision.go:87] duration metric: took 894.404674ms to configureAuth
	I1013 22:11:31.294450  213009 ubuntu.go:206] setting minikube options for container-runtime
	I1013 22:11:31.294666  213009 config.go:182] Loaded profile config "auto-122822": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:11:31.294800  213009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-122822
	I1013 22:11:31.311489  213009 main.go:141] libmachine: Using SSH client type: native
	I1013 22:11:31.311839  213009 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33101 <nil> <nil>}
	I1013 22:11:31.311861  213009 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1013 22:11:31.570527  213009 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1013 22:11:31.570547  213009 machine.go:96] duration metric: took 4.723458388s to provisionDockerMachine
	I1013 22:11:31.570557  213009 client.go:171] duration metric: took 12.581184171s to LocalClient.Create
	I1013 22:11:31.570575  213009 start.go:167] duration metric: took 12.58124579s to libmachine.API.Create "auto-122822"
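The provisioning step logged above ends by writing a CRI-O environment drop-in so the cluster's service CIDR (10.96.0.0/12) is treated as an insecure registry, then restarting crio. A minimal manual equivalent inside the node, assuming the same paths the log uses, would be:

    sudo mkdir -p /etc/sysconfig
    echo "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" | sudo tee /etc/sysconfig/crio.minikube
    sudo systemctl restart crio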
	I1013 22:11:31.570583  213009 start.go:293] postStartSetup for "auto-122822" (driver="docker")
	I1013 22:11:31.570593  213009 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 22:11:31.570659  213009 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 22:11:31.570701  213009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-122822
	I1013 22:11:31.588588  213009 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33101 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/auto-122822/id_rsa Username:docker}
	I1013 22:11:31.691651  213009 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 22:11:31.695058  213009 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1013 22:11:31.695088  213009 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1013 22:11:31.695100  213009 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-2495/.minikube/addons for local assets ...
	I1013 22:11:31.695170  213009 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-2495/.minikube/files for local assets ...
	I1013 22:11:31.695280  213009 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem -> 42992.pem in /etc/ssl/certs
	I1013 22:11:31.695418  213009 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1013 22:11:31.703265  213009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem --> /etc/ssl/certs/42992.pem (1708 bytes)
	I1013 22:11:31.726367  213009 start.go:296] duration metric: took 155.769671ms for postStartSetup
	I1013 22:11:31.726718  213009 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-122822
	I1013 22:11:31.752523  213009 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/auto-122822/config.json ...
	I1013 22:11:31.752791  213009 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 22:11:31.752843  213009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-122822
	I1013 22:11:31.770384  213009 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33101 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/auto-122822/id_rsa Username:docker}
	I1013 22:11:31.868686  213009 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1013 22:11:31.873100  213009 start.go:128] duration metric: took 12.888806197s to createHost
	I1013 22:11:31.873121  213009 start.go:83] releasing machines lock for "auto-122822", held for 12.888928031s
	I1013 22:11:31.873190  213009 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-122822
	I1013 22:11:31.891887  213009 ssh_runner.go:195] Run: cat /version.json
	I1013 22:11:31.891944  213009 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 22:11:31.891954  213009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-122822
	I1013 22:11:31.892010  213009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-122822
	I1013 22:11:31.915854  213009 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33101 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/auto-122822/id_rsa Username:docker}
	I1013 22:11:31.924098  213009 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33101 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/auto-122822/id_rsa Username:docker}
	I1013 22:11:32.109612  213009 ssh_runner.go:195] Run: systemctl --version
	I1013 22:11:32.117312  213009 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1013 22:11:32.156270  213009 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 22:11:32.160562  213009 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 22:11:32.160657  213009 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 22:11:32.190824  213009 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1013 22:11:32.190885  213009 start.go:495] detecting cgroup driver to use...
	I1013 22:11:32.190940  213009 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1013 22:11:32.191024  213009 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1013 22:11:32.213190  213009 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1013 22:11:32.226666  213009 docker.go:218] disabling cri-docker service (if available) ...
	I1013 22:11:32.226729  213009 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 22:11:32.249418  213009 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 22:11:32.266520  213009 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 22:11:32.400517  213009 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 22:11:32.531125  213009 docker.go:234] disabling docker service ...
	I1013 22:11:32.531219  213009 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 22:11:32.552370  213009 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 22:11:32.567361  213009 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 22:11:32.697991  213009 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 22:11:32.829095  213009 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1013 22:11:32.842442  213009 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 22:11:32.857901  213009 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1013 22:11:32.857986  213009 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:11:32.866903  213009 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1013 22:11:32.867009  213009 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:11:32.876649  213009 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:11:32.885501  213009 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:11:32.894890  213009 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 22:11:32.903144  213009 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:11:32.911989  213009 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:11:32.925823  213009 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:11:32.934579  213009 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 22:11:32.942282  213009 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1013 22:11:32.956236  213009 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:11:33.073387  213009 ssh_runner.go:195] Run: sudo systemctl restart crio
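The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf to pin the pause image to registry.k8s.io/pause:3.10.1, select the cgroupfs cgroup manager with a pod-scoped conmon cgroup, and allow unprivileged low ports via default_sysctls, before crio is restarted. Condensed to the essentials (same file and keys as in the log), the sequence is roughly:

    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
    sudo systemctl daemon-reload && sudo systemctl restart crio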
	I1013 22:11:33.217357  213009 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1013 22:11:33.217479  213009 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1013 22:11:33.221677  213009 start.go:563] Will wait 60s for crictl version
	I1013 22:11:33.221823  213009 ssh_runner.go:195] Run: which crictl
	I1013 22:11:33.226216  213009 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1013 22:11:33.259560  213009 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1013 22:11:33.259725  213009 ssh_runner.go:195] Run: crio --version
	I1013 22:11:33.295264  213009 ssh_runner.go:195] Run: crio --version
	I1013 22:11:33.328328  213009 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1013 22:11:33.331157  213009 cli_runner.go:164] Run: docker network inspect auto-122822 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 22:11:33.347565  213009 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1013 22:11:33.351552  213009 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 22:11:33.360941  213009 kubeadm.go:883] updating cluster {Name:auto-122822 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-122822 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 22:11:33.361056  213009 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:11:33.361120  213009 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 22:11:33.393457  213009 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 22:11:33.393481  213009 crio.go:433] Images already preloaded, skipping extraction
	I1013 22:11:33.393532  213009 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 22:11:33.419481  213009 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 22:11:33.419506  213009 cache_images.go:85] Images are preloaded, skipping loading
	I1013 22:11:33.419521  213009 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1013 22:11:33.419615  213009 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-122822 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-122822 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1013 22:11:33.419715  213009 ssh_runner.go:195] Run: crio config
	I1013 22:11:33.480554  213009 cni.go:84] Creating CNI manager for ""
	I1013 22:11:33.480578  213009 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 22:11:33.480600  213009 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1013 22:11:33.480623  213009 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-122822 NodeName:auto-122822 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 22:11:33.480750  213009 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-122822"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1013 22:11:33.480818  213009 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1013 22:11:33.488465  213009 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 22:11:33.488580  213009 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 22:11:33.496032  213009 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1013 22:11:33.509870  213009 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 22:11:33.522758  213009 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
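The kubeadm configuration rendered above is staged as /var/tmp/minikube/kubeadm.yaml.new, promoted to /var/tmp/minikube/kubeadm.yaml, and consumed by the kubeadm init invocation logged further down. As a hedged way to check such a rendered config without changing cluster state, kubeadm's dry-run mode can be pointed at the same file (binary path taken from the log):

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run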
	I1013 22:11:33.535505  213009 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1013 22:11:33.538816  213009 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 22:11:33.548436  213009 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	W1013 22:11:29.742543  208589 pod_ready.go:104] pod "coredns-66bc5c9577-vftdh" is not "Ready", error: <nil>
	W1013 22:11:31.745582  208589 pod_ready.go:104] pod "coredns-66bc5c9577-vftdh" is not "Ready", error: <nil>
	W1013 22:11:33.749276  208589 pod_ready.go:104] pod "coredns-66bc5c9577-vftdh" is not "Ready", error: <nil>
	I1013 22:11:33.670479  213009 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 22:11:33.687503  213009 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/auto-122822 for IP: 192.168.85.2
	I1013 22:11:33.687522  213009 certs.go:195] generating shared ca certs ...
	I1013 22:11:33.687541  213009 certs.go:227] acquiring lock for ca certs: {Name:mk2386d3847709c1fe7ff4ab092e2e3fd8551167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:11:33.687691  213009 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-2495/.minikube/ca.key
	I1013 22:11:33.687729  213009 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.key
	I1013 22:11:33.687736  213009 certs.go:257] generating profile certs ...
	I1013 22:11:33.687847  213009 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/auto-122822/client.key
	I1013 22:11:33.687868  213009 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/auto-122822/client.crt with IP's: []
	I1013 22:11:34.102970  213009 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/auto-122822/client.crt ...
	I1013 22:11:34.103006  213009 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/auto-122822/client.crt: {Name:mk35c08afb3d37df981ceacf86559e2e7099c846 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:11:34.103245  213009 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/auto-122822/client.key ...
	I1013 22:11:34.103265  213009 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/auto-122822/client.key: {Name:mk604b8114fe0926b3be098ec32c6b552a0cba5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:11:34.103393  213009 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/auto-122822/apiserver.key.b531f30f
	I1013 22:11:34.103414  213009 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/auto-122822/apiserver.crt.b531f30f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1013 22:11:34.620299  213009 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/auto-122822/apiserver.crt.b531f30f ...
	I1013 22:11:34.620333  213009 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/auto-122822/apiserver.crt.b531f30f: {Name:mk14740cfd52891948b9ab2ec8d503d0c00264eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:11:34.620528  213009 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/auto-122822/apiserver.key.b531f30f ...
	I1013 22:11:34.620544  213009 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/auto-122822/apiserver.key.b531f30f: {Name:mka5fb5f70d766bcab1695323b9758ddfa229912 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:11:34.620629  213009 certs.go:382] copying /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/auto-122822/apiserver.crt.b531f30f -> /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/auto-122822/apiserver.crt
	I1013 22:11:34.620706  213009 certs.go:386] copying /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/auto-122822/apiserver.key.b531f30f -> /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/auto-122822/apiserver.key
	I1013 22:11:34.620769  213009 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/auto-122822/proxy-client.key
	I1013 22:11:34.620787  213009 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/auto-122822/proxy-client.crt with IP's: []
	I1013 22:11:35.448282  213009 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/auto-122822/proxy-client.crt ...
	I1013 22:11:35.448314  213009 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/auto-122822/proxy-client.crt: {Name:mk78fed5d1f2512d84e91354fec660186bec6c00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:11:35.448509  213009 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/auto-122822/proxy-client.key ...
	I1013 22:11:35.448522  213009 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/auto-122822/proxy-client.key: {Name:mk248c97bad9b465f89aeabe0eda4c2b67d3cddd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:11:35.448720  213009 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/4299.pem (1338 bytes)
	W1013 22:11:35.448763  213009 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-2495/.minikube/certs/4299_empty.pem, impossibly tiny 0 bytes
	I1013 22:11:35.448777  213009 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca-key.pem (1679 bytes)
	I1013 22:11:35.448803  213009 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem (1082 bytes)
	I1013 22:11:35.448829  213009 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem (1123 bytes)
	I1013 22:11:35.448859  213009 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/key.pem (1675 bytes)
	I1013 22:11:35.448904  213009 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem (1708 bytes)
	I1013 22:11:35.449563  213009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 22:11:35.468050  213009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1013 22:11:35.488555  213009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 22:11:35.510558  213009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1013 22:11:35.529628  213009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/auto-122822/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1013 22:11:35.549851  213009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/auto-122822/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1013 22:11:35.569164  213009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/auto-122822/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 22:11:35.587514  213009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/auto-122822/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1013 22:11:35.605219  213009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem --> /usr/share/ca-certificates/42992.pem (1708 bytes)
	I1013 22:11:35.623205  213009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 22:11:35.642438  213009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/certs/4299.pem --> /usr/share/ca-certificates/4299.pem (1338 bytes)
	I1013 22:11:35.660646  213009 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 22:11:35.673634  213009 ssh_runner.go:195] Run: openssl version
	I1013 22:11:35.680113  213009 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/42992.pem && ln -fs /usr/share/ca-certificates/42992.pem /etc/ssl/certs/42992.pem"
	I1013 22:11:35.688621  213009 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42992.pem
	I1013 22:11:35.692827  213009 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 13 21:06 /usr/share/ca-certificates/42992.pem
	I1013 22:11:35.692895  213009 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42992.pem
	I1013 22:11:35.735994  213009 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/42992.pem /etc/ssl/certs/3ec20f2e.0"
	I1013 22:11:35.746142  213009 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 22:11:35.754292  213009 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:11:35.758671  213009 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 20:59 /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:11:35.758777  213009 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:11:35.799994  213009 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1013 22:11:35.808204  213009 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4299.pem && ln -fs /usr/share/ca-certificates/4299.pem /etc/ssl/certs/4299.pem"
	I1013 22:11:35.816966  213009 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4299.pem
	I1013 22:11:35.820864  213009 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 13 21:06 /usr/share/ca-certificates/4299.pem
	I1013 22:11:35.820962  213009 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4299.pem
	I1013 22:11:35.863219  213009 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4299.pem /etc/ssl/certs/51391683.0"
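The short symlink names used above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject-hash filenames: each certificate copied under /usr/share/ca-certificates is linked into /etc/ssl/certs under the hash openssl computes for it, which is how the system trust store resolves CAs. The hash behind one of the links created above can be reproduced with:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # expected to print b5213941, matching the /etc/ssl/certs/b5213941.0 symlink in the log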
	I1013 22:11:35.875654  213009 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 22:11:35.880738  213009 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1013 22:11:35.880819  213009 kubeadm.go:400] StartCluster: {Name:auto-122822 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-122822 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:11:35.880899  213009 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 22:11:35.880975  213009 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 22:11:35.908354  213009 cri.go:89] found id: ""
	I1013 22:11:35.908489  213009 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1013 22:11:35.916073  213009 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1013 22:11:35.924630  213009 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1013 22:11:35.924709  213009 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1013 22:11:35.936266  213009 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1013 22:11:35.936287  213009 kubeadm.go:157] found existing configuration files:
	
	I1013 22:11:35.936336  213009 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1013 22:11:35.948118  213009 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1013 22:11:35.948181  213009 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1013 22:11:35.956720  213009 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1013 22:11:35.965789  213009 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1013 22:11:35.965853  213009 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1013 22:11:35.975468  213009 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1013 22:11:35.985689  213009 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1013 22:11:35.985751  213009 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1013 22:11:35.994003  213009 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1013 22:11:36.001699  213009 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1013 22:11:36.001806  213009 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1013 22:11:36.012827  213009 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1013 22:11:36.063426  213009 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1013 22:11:36.063516  213009 kubeadm.go:318] [preflight] Running pre-flight checks
	I1013 22:11:36.091598  213009 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1013 22:11:36.091706  213009 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1013 22:11:36.091773  213009 kubeadm.go:318] OS: Linux
	I1013 22:11:36.091891  213009 kubeadm.go:318] CGROUPS_CPU: enabled
	I1013 22:11:36.091962  213009 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1013 22:11:36.092030  213009 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1013 22:11:36.092101  213009 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1013 22:11:36.092178  213009 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1013 22:11:36.092258  213009 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1013 22:11:36.092330  213009 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1013 22:11:36.092402  213009 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1013 22:11:36.092473  213009 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1013 22:11:36.163542  213009 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1013 22:11:36.163693  213009 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1013 22:11:36.163839  213009 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1013 22:11:36.172357  213009 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1013 22:11:36.178480  213009 out.go:252]   - Generating certificates and keys ...
	I1013 22:11:36.178582  213009 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1013 22:11:36.178656  213009 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1013 22:11:36.864525  213009 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1013 22:11:36.923267  213009 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1013 22:11:36.982704  213009 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1013 22:11:37.671066  213009 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1013 22:11:38.287529  213009 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1013 22:11:38.287882  213009 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [auto-122822 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1013 22:11:38.501602  213009 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1013 22:11:38.501920  213009 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [auto-122822 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	W1013 22:11:36.246344  208589 pod_ready.go:104] pod "coredns-66bc5c9577-vftdh" is not "Ready", error: <nil>
	W1013 22:11:38.743340  208589 pod_ready.go:104] pod "coredns-66bc5c9577-vftdh" is not "Ready", error: <nil>
	I1013 22:11:38.677585  213009 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1013 22:11:38.937290  213009 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1013 22:11:39.063703  213009 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1013 22:11:39.064022  213009 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1013 22:11:39.253856  213009 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1013 22:11:39.701462  213009 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1013 22:11:40.640966  213009 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1013 22:11:41.092788  213009 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1013 22:11:41.509040  213009 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1013 22:11:41.510014  213009 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1013 22:11:41.512903  213009 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1013 22:11:40.744147  208589 pod_ready.go:104] pod "coredns-66bc5c9577-vftdh" is not "Ready", error: <nil>
	I1013 22:11:41.743855  208589 pod_ready.go:94] pod "coredns-66bc5c9577-vftdh" is "Ready"
	I1013 22:11:41.743880  208589 pod_ready.go:86] duration metric: took 30.006264171s for pod "coredns-66bc5c9577-vftdh" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:11:41.746617  208589 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-007533" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:11:41.751309  208589 pod_ready.go:94] pod "etcd-default-k8s-diff-port-007533" is "Ready"
	I1013 22:11:41.751332  208589 pod_ready.go:86] duration metric: took 4.686387ms for pod "etcd-default-k8s-diff-port-007533" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:11:41.753787  208589 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-007533" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:11:41.758595  208589 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-007533" is "Ready"
	I1013 22:11:41.758636  208589 pod_ready.go:86] duration metric: took 4.812841ms for pod "kube-apiserver-default-k8s-diff-port-007533" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:11:41.761123  208589 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-007533" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:11:41.941517  208589 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-007533" is "Ready"
	I1013 22:11:41.941545  208589 pod_ready.go:86] duration metric: took 180.401197ms for pod "kube-controller-manager-default-k8s-diff-port-007533" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:11:42.153498  208589 pod_ready.go:83] waiting for pod "kube-proxy-5947n" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:11:42.542024  208589 pod_ready.go:94] pod "kube-proxy-5947n" is "Ready"
	I1013 22:11:42.542048  208589 pod_ready.go:86] duration metric: took 388.517012ms for pod "kube-proxy-5947n" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:11:42.743603  208589 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-007533" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:11:43.142288  208589 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-007533" is "Ready"
	I1013 22:11:43.142380  208589 pod_ready.go:86] duration metric: took 398.748447ms for pod "kube-scheduler-default-k8s-diff-port-007533" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:11:43.142404  208589 pod_ready.go:40] duration metric: took 31.410791404s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 22:11:43.198834  208589 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1013 22:11:43.202288  208589 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-007533" cluster and "default" namespace by default
	I1013 22:11:41.516615  213009 out.go:252]   - Booting up control plane ...
	I1013 22:11:41.516721  213009 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1013 22:11:41.516803  213009 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1013 22:11:41.516873  213009 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1013 22:11:41.532227  213009 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1013 22:11:41.532619  213009 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1013 22:11:41.540999  213009 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1013 22:11:41.541378  213009 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1013 22:11:41.541632  213009 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1013 22:11:41.680257  213009 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1013 22:11:41.680386  213009 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1013 22:11:43.179947  213009 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501782364s
	I1013 22:11:43.188295  213009 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1013 22:11:43.188395  213009 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1013 22:11:43.189604  213009 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1013 22:11:43.189701  213009 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1013 22:11:47.103419  213009 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.913884435s
	I1013 22:11:49.218368  213009 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 6.029085359s
	I1013 22:11:49.692767  213009 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.502164102s
	I1013 22:11:49.728741  213009 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1013 22:11:49.750493  213009 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1013 22:11:49.774635  213009 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1013 22:11:49.774854  213009 kubeadm.go:318] [mark-control-plane] Marking the node auto-122822 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1013 22:11:49.788093  213009 kubeadm.go:318] [bootstrap-token] Using token: z4rcal.1hgvybjffvqspgx8
	I1013 22:11:49.790945  213009 out.go:252]   - Configuring RBAC rules ...
	I1013 22:11:49.791081  213009 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1013 22:11:49.797725  213009 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1013 22:11:49.805980  213009 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1013 22:11:49.814654  213009 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1013 22:11:49.819693  213009 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1013 22:11:49.824467  213009 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1013 22:11:50.100686  213009 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1013 22:11:50.530156  213009 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1013 22:11:51.098974  213009 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1013 22:11:51.100325  213009 kubeadm.go:318] 
	I1013 22:11:51.100407  213009 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1013 22:11:51.100417  213009 kubeadm.go:318] 
	I1013 22:11:51.100508  213009 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1013 22:11:51.100518  213009 kubeadm.go:318] 
	I1013 22:11:51.100565  213009 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1013 22:11:51.100643  213009 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1013 22:11:51.100698  213009 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1013 22:11:51.100710  213009 kubeadm.go:318] 
	I1013 22:11:51.100777  213009 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1013 22:11:51.100782  213009 kubeadm.go:318] 
	I1013 22:11:51.100838  213009 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1013 22:11:51.100843  213009 kubeadm.go:318] 
	I1013 22:11:51.100898  213009 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1013 22:11:51.100977  213009 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1013 22:11:51.101050  213009 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1013 22:11:51.101054  213009 kubeadm.go:318] 
	I1013 22:11:51.101142  213009 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1013 22:11:51.101224  213009 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1013 22:11:51.101229  213009 kubeadm.go:318] 
	I1013 22:11:51.101322  213009 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token z4rcal.1hgvybjffvqspgx8 \
	I1013 22:11:51.101433  213009 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:8246abc30e5ad4f73e0c5665e9f98b5f472397f3707b313971a0405cce9e4e60 \
	I1013 22:11:51.101454  213009 kubeadm.go:318] 	--control-plane 
	I1013 22:11:51.101459  213009 kubeadm.go:318] 
	I1013 22:11:51.101580  213009 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1013 22:11:51.101602  213009 kubeadm.go:318] 
	I1013 22:11:51.101714  213009 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token z4rcal.1hgvybjffvqspgx8 \
	I1013 22:11:51.101829  213009 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:8246abc30e5ad4f73e0c5665e9f98b5f472397f3707b313971a0405cce9e4e60 
	I1013 22:11:51.105573  213009 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1013 22:11:51.105822  213009 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1013 22:11:51.105943  213009 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1013 22:11:51.105964  213009 cni.go:84] Creating CNI manager for ""
	I1013 22:11:51.105980  213009 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 22:11:51.109118  213009 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1013 22:11:51.112084  213009 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1013 22:11:51.116426  213009 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1013 22:11:51.116449  213009 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1013 22:11:51.132432  213009 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1013 22:11:51.466568  213009 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1013 22:11:51.466695  213009 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:11:51.466781  213009 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-122822 minikube.k8s.io/updated_at=2025_10_13T22_11_51_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6d66ff63385795e7745a92b3d96cb54f5b977801 minikube.k8s.io/name=auto-122822 minikube.k8s.io/primary=true
	I1013 22:11:51.480595  213009 ops.go:34] apiserver oom_adj: -16
	I1013 22:11:51.635758  213009 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:11:52.136361  213009 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:11:52.635974  213009 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:11:53.135974  213009 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:11:53.635953  213009 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:11:54.135964  213009 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:11:54.635933  213009 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:11:55.135988  213009 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:11:55.635913  213009 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:11:55.915738  213009 kubeadm.go:1113] duration metric: took 4.449086874s to wait for elevateKubeSystemPrivileges
	I1013 22:11:55.915826  213009 kubeadm.go:402] duration metric: took 20.034991444s to StartCluster
	I1013 22:11:55.915848  213009 settings.go:142] acquiring lock: {Name:mk4a4b065845724eb9b4bb1832a39a02e57dd066 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:11:55.915912  213009 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-2495/kubeconfig
	I1013 22:11:55.916925  213009 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/kubeconfig: {Name:mke211bdcf461494c5205a242d94f4d44bbce10e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:11:55.917169  213009 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 22:11:55.917373  213009 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1013 22:11:55.917665  213009 config.go:182] Loaded profile config "auto-122822": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:11:55.917707  213009 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1013 22:11:55.917772  213009 addons.go:69] Setting storage-provisioner=true in profile "auto-122822"
	I1013 22:11:55.917790  213009 addons.go:238] Setting addon storage-provisioner=true in "auto-122822"
	I1013 22:11:55.917811  213009 host.go:66] Checking if "auto-122822" exists ...
	I1013 22:11:55.917831  213009 addons.go:69] Setting default-storageclass=true in profile "auto-122822"
	I1013 22:11:55.917849  213009 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-122822"
	I1013 22:11:55.918150  213009 cli_runner.go:164] Run: docker container inspect auto-122822 --format={{.State.Status}}
	I1013 22:11:55.921368  213009 cli_runner.go:164] Run: docker container inspect auto-122822 --format={{.State.Status}}
	I1013 22:11:55.930319  213009 out.go:179] * Verifying Kubernetes components...
	I1013 22:11:55.934255  213009 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:11:56.001941  213009 addons.go:238] Setting addon default-storageclass=true in "auto-122822"
	I1013 22:11:56.001993  213009 host.go:66] Checking if "auto-122822" exists ...
	I1013 22:11:56.002423  213009 cli_runner.go:164] Run: docker container inspect auto-122822 --format={{.State.Status}}
	I1013 22:11:56.022463  213009 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1013 22:11:56.026795  213009 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 22:11:56.026821  213009 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1013 22:11:56.026885  213009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-122822
	I1013 22:11:56.063139  213009 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1013 22:11:56.063164  213009 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1013 22:11:56.063240  213009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-122822
	I1013 22:11:56.070978  213009 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33101 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/auto-122822/id_rsa Username:docker}
	I1013 22:11:56.098230  213009 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33101 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/auto-122822/id_rsa Username:docker}
	I1013 22:11:56.544218  213009 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1013 22:11:56.544415  213009 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 22:11:56.571980  213009 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 22:11:56.822765  213009 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1013 22:11:57.383341  213009 node_ready.go:35] waiting up to 15m0s for node "auto-122822" to be "Ready" ...
	I1013 22:11:57.383662  213009 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1013 22:11:57.581676  213009 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1013 22:11:57.584536  213009 addons.go:514] duration metric: took 1.666815171s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1013 22:11:57.888687  213009 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-122822" context rescaled to 1 replicas
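The run of "kubectl get sa default" commands above is minikube polling until the cluster's default ServiceAccount exists before it creates the cluster-admin RBAC binding; the 4.449s reported for elevateKubeSystemPrivileges is simply the total time spent in that loop. A minimal Go sketch of the same polling pattern, shelling out to kubectl directly (the paths and the 500ms interval are assumptions for illustration, not minikube's actual implementation):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultServiceAccount retries "kubectl get sa default" until it
// succeeds or the timeout expires, mirroring the ~500ms cadence in the log.
func waitForDefaultServiceAccount(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command(kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if cmd.Run() == nil {
			return nil // ServiceAccount exists; RBAC setup can proceed
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default ServiceAccount not found within %s", timeout)
}

func main() {
	start := time.Now()
	if err := waitForDefaultServiceAccount(
		"/var/lib/minikube/binaries/v1.34.1/kubectl",
		"/var/lib/minikube/kubeconfig",
		2*time.Minute,
	); err != nil {
		fmt.Println("wait failed:", err)
		return
	}
	fmt.Printf("took %s to wait for the default ServiceAccount\n", time.Since(start))
}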
	
	
	==> CRI-O <==
	Oct 13 22:11:40 default-k8s-diff-port-007533 crio[652]: time="2025-10-13T22:11:40.781184263Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=5d5d13d5-825f-4111-b705-3bb15f3d3d29 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:11:40 default-k8s-diff-port-007533 crio[652]: time="2025-10-13T22:11:40.784342345Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=e423c93f-56f5-4516-8006-0e22abefba45 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:11:40 default-k8s-diff-port-007533 crio[652]: time="2025-10-13T22:11:40.784594515Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:11:40 default-k8s-diff-port-007533 crio[652]: time="2025-10-13T22:11:40.792166843Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:11:40 default-k8s-diff-port-007533 crio[652]: time="2025-10-13T22:11:40.792359215Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/751bf86ddcb363c410d8c08adc8f7ef3647e3ad0aacbf1d6702965c54bb39e9e/merged/etc/passwd: no such file or directory"
	Oct 13 22:11:40 default-k8s-diff-port-007533 crio[652]: time="2025-10-13T22:11:40.792384954Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/751bf86ddcb363c410d8c08adc8f7ef3647e3ad0aacbf1d6702965c54bb39e9e/merged/etc/group: no such file or directory"
	Oct 13 22:11:40 default-k8s-diff-port-007533 crio[652]: time="2025-10-13T22:11:40.792681201Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:11:40 default-k8s-diff-port-007533 crio[652]: time="2025-10-13T22:11:40.824273611Z" level=info msg="Created container 55549ec52a2ac020ce56c1f9974b4fd36115996f322fd5e802bb928b4087a999: kube-system/storage-provisioner/storage-provisioner" id=e423c93f-56f5-4516-8006-0e22abefba45 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:11:40 default-k8s-diff-port-007533 crio[652]: time="2025-10-13T22:11:40.825224782Z" level=info msg="Starting container: 55549ec52a2ac020ce56c1f9974b4fd36115996f322fd5e802bb928b4087a999" id=8ffca02b-d290-41e7-9a90-fc314f6016e0 name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 22:11:40 default-k8s-diff-port-007533 crio[652]: time="2025-10-13T22:11:40.827158846Z" level=info msg="Started container" PID=1638 containerID=55549ec52a2ac020ce56c1f9974b4fd36115996f322fd5e802bb928b4087a999 description=kube-system/storage-provisioner/storage-provisioner id=8ffca02b-d290-41e7-9a90-fc314f6016e0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=91cdf3bc4422ae04fac17ffb6be3b1a9e53555f420adf5dc1605c19e8b2171a8
	Oct 13 22:11:49 default-k8s-diff-port-007533 crio[652]: time="2025-10-13T22:11:49.754421698Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 13 22:11:49 default-k8s-diff-port-007533 crio[652]: time="2025-10-13T22:11:49.757846152Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 13 22:11:49 default-k8s-diff-port-007533 crio[652]: time="2025-10-13T22:11:49.757994439Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 13 22:11:49 default-k8s-diff-port-007533 crio[652]: time="2025-10-13T22:11:49.758069588Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 13 22:11:49 default-k8s-diff-port-007533 crio[652]: time="2025-10-13T22:11:49.761995552Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 13 22:11:49 default-k8s-diff-port-007533 crio[652]: time="2025-10-13T22:11:49.762129719Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 13 22:11:49 default-k8s-diff-port-007533 crio[652]: time="2025-10-13T22:11:49.762196219Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 13 22:11:49 default-k8s-diff-port-007533 crio[652]: time="2025-10-13T22:11:49.765135107Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 13 22:11:49 default-k8s-diff-port-007533 crio[652]: time="2025-10-13T22:11:49.765256031Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 13 22:11:49 default-k8s-diff-port-007533 crio[652]: time="2025-10-13T22:11:49.765322047Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 13 22:11:49 default-k8s-diff-port-007533 crio[652]: time="2025-10-13T22:11:49.768524485Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 13 22:11:49 default-k8s-diff-port-007533 crio[652]: time="2025-10-13T22:11:49.76864184Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 13 22:11:49 default-k8s-diff-port-007533 crio[652]: time="2025-10-13T22:11:49.768714174Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 13 22:11:49 default-k8s-diff-port-007533 crio[652]: time="2025-10-13T22:11:49.772989052Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 13 22:11:49 default-k8s-diff-port-007533 crio[652]: time="2025-10-13T22:11:49.773106038Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	55549ec52a2ac       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           18 seconds ago       Running             storage-provisioner         2                   91cdf3bc4422a       storage-provisioner                                    kube-system
	abf6fe6a0c2b4       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           20 seconds ago       Exited              dashboard-metrics-scraper   2                   5251be7afd3b7       dashboard-metrics-scraper-6ffb444bf9-jbcqw             kubernetes-dashboard
	c4bacb88f25bc       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   31 seconds ago       Running             kubernetes-dashboard        0                   664fc27f5f788       kubernetes-dashboard-855c9754f9-ktrdv                  kubernetes-dashboard
	6a5df3c5a9045       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           50 seconds ago       Running             kube-proxy                  1                   e720ebc96b802       kube-proxy-5947n                                       kube-system
	2619fffe3a121       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           50 seconds ago       Running             coredns                     1                   d9b916588cee5       coredns-66bc5c9577-vftdh                               kube-system
	91481353c67cc       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           50 seconds ago       Exited              storage-provisioner         1                   91cdf3bc4422a       storage-provisioner                                    kube-system
	b7a49ab1e9406       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           50 seconds ago       Running             kindnet-cni                 1                   b477c7242cba7       kindnet-xvkwh                                          kube-system
	7f92280ab414d       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           50 seconds ago       Running             busybox                     1                   5aa54aa3a0552       busybox                                                default
	3970a5fddb4ed       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   eb326ece555a8       kube-scheduler-default-k8s-diff-port-007533            kube-system
	5bbc4021a2610       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   52475e8a17180       kube-apiserver-default-k8s-diff-port-007533            kube-system
	bd56c01842940       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   3106f2cf0eeb0       kube-controller-manager-default-k8s-diff-port-007533   kube-system
	99b9c491479a5       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   43410061e3737       etcd-default-k8s-diff-port-007533                      kube-system
	
	
	==> coredns [2619fffe3a121a9831056e97ad35ee96fa24908d3db94f825e51faa63ed6a795] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:41863 - 27930 "HINFO IN 5218861235792626440.6989432745175703380. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020119972s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
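The "dial tcp 10.96.0.1:443: i/o timeout" errors above mean CoreDNS could not reach the in-cluster kubernetes Service VIP while the restarted dataplane (kube-proxy/kindnet) was still coming up; the listers recover once that VIP becomes routable. A hypothetical reachability probe for that condition, run from inside the pod network (the 5s timeout is an arbitrary choice, not taken from the report):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// 10.96.0.1:443 is the "kubernetes" Service VIP named in the CoreDNS errors.
	conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 5*time.Second)
	if err != nil {
		fmt.Println("apiserver VIP unreachable:", err) // corresponds to the i/o timeouts logged above
		return
	}
	conn.Close()
	fmt.Println("apiserver VIP reachable")
}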
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-007533
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-007533
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6d66ff63385795e7745a92b3d96cb54f5b977801
	                    minikube.k8s.io/name=default-k8s-diff-port-007533
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_13T22_09_37_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 Oct 2025 22:09:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-007533
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 Oct 2025 22:11:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 Oct 2025 22:11:39 +0000   Mon, 13 Oct 2025 22:09:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 Oct 2025 22:11:39 +0000   Mon, 13 Oct 2025 22:09:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 Oct 2025 22:11:39 +0000   Mon, 13 Oct 2025 22:09:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 13 Oct 2025 22:11:39 +0000   Mon, 13 Oct 2025 22:10:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-007533
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 063d00db17b345a69c75216d67066c96
	  System UUID:                31edf4b0-bfde-45c9-96bd-f89ce401d052
	  Boot ID:                    d306204e-e74c-4697-9887-c9f3c96cc083
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-66bc5c9577-vftdh                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m18s
	  kube-system                 etcd-default-k8s-diff-port-007533                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m23s
	  kube-system                 kindnet-xvkwh                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m18s
	  kube-system                 kube-apiserver-default-k8s-diff-port-007533             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-007533    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 kube-proxy-5947n                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 kube-scheduler-default-k8s-diff-port-007533             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m17s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-jbcqw              0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-ktrdv                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m17s                  kube-proxy       
	  Normal   Starting                 48s                    kube-proxy       
	  Normal   Starting                 2m31s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m31s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m31s (x8 over 2m31s)  kubelet          Node default-k8s-diff-port-007533 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m31s (x8 over 2m31s)  kubelet          Node default-k8s-diff-port-007533 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m31s (x8 over 2m31s)  kubelet          Node default-k8s-diff-port-007533 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m23s                  kubelet          Node default-k8s-diff-port-007533 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m23s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m23s                  kubelet          Node default-k8s-diff-port-007533 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m23s                  kubelet          Node default-k8s-diff-port-007533 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m23s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m19s                  node-controller  Node default-k8s-diff-port-007533 event: Registered Node default-k8s-diff-port-007533 in Controller
	  Normal   NodeReady                97s                    kubelet          Node default-k8s-diff-port-007533 status is now: NodeReady
	  Normal   Starting                 61s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 61s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  61s (x8 over 61s)      kubelet          Node default-k8s-diff-port-007533 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    61s (x8 over 61s)      kubelet          Node default-k8s-diff-port-007533 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     61s (x8 over 61s)      kubelet          Node default-k8s-diff-port-007533 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           46s                    node-controller  Node default-k8s-diff-port-007533 event: Registered Node default-k8s-diff-port-007533 in Controller
	
	
	==> dmesg <==
	[Oct13 21:43] overlayfs: idmapped layers are currently not supported
	[ +17.500139] overlayfs: idmapped layers are currently not supported
	[Oct13 21:44] overlayfs: idmapped layers are currently not supported
	[ +25.978359] overlayfs: idmapped layers are currently not supported
	[Oct13 21:46] overlayfs: idmapped layers are currently not supported
	[Oct13 21:47] overlayfs: idmapped layers are currently not supported
	[Oct13 21:49] overlayfs: idmapped layers are currently not supported
	[Oct13 21:50] overlayfs: idmapped layers are currently not supported
	[Oct13 21:51] overlayfs: idmapped layers are currently not supported
	[Oct13 21:53] overlayfs: idmapped layers are currently not supported
	[Oct13 21:54] overlayfs: idmapped layers are currently not supported
	[Oct13 21:55] overlayfs: idmapped layers are currently not supported
	[Oct13 22:02] overlayfs: idmapped layers are currently not supported
	[Oct13 22:04] overlayfs: idmapped layers are currently not supported
	[ +37.438407] overlayfs: idmapped layers are currently not supported
	[Oct13 22:05] overlayfs: idmapped layers are currently not supported
	[Oct13 22:06] overlayfs: idmapped layers are currently not supported
	[Oct13 22:07] overlayfs: idmapped layers are currently not supported
	[ +29.672836] overlayfs: idmapped layers are currently not supported
	[Oct13 22:08] overlayfs: idmapped layers are currently not supported
	[Oct13 22:09] overlayfs: idmapped layers are currently not supported
	[Oct13 22:10] overlayfs: idmapped layers are currently not supported
	[ +26.243538] overlayfs: idmapped layers are currently not supported
	[  +3.497977] overlayfs: idmapped layers are currently not supported
	[Oct13 22:11] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [99b9c491479a5e957e19a4c1ca9d1a62f9cde3467897c3b831fc01afd815b1f7] <==
	{"level":"warn","ts":"2025-10-13T22:11:04.396317Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:11:04.427943Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:11:04.466192Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:11:04.492944Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:11:04.552576Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:11:04.588319Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:11:04.622747Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:11:04.680782Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:11:04.711859Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:11:04.768324Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:11:04.778672Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:11:04.820251Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:11:04.853477Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:11:04.903231Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:11:04.942485Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:11:04.975740Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:11:05.011447Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:11:05.054851Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:11:05.099537Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:11:05.174270Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:11:05.211030Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:11:05.254568Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:11:05.361117Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:11:05.496771Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32882","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-13T22:11:09.795735Z","caller":"traceutil/trace.go:172","msg":"trace[2006571654] transaction","detail":"{read_only:false; response_revision:512; number_of_response:1; }","duration":"119.131131ms","start":"2025-10-13T22:11:09.676586Z","end":"2025-10-13T22:11:09.795717Z","steps":["trace[2006571654] 'process raft request'  (duration: 118.864504ms)"],"step_count":1}
	
	
	==> kernel <==
	 22:11:59 up  1:54,  0 user,  load average: 6.07, 4.08, 2.80
	Linux default-k8s-diff-port-007533 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b7a49ab1e9406cec6e4d3573a11414997615cb5773b9431d80fda6e6f6b41fa8] <==
	I1013 22:11:09.600529       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1013 22:11:09.600745       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1013 22:11:09.600855       1 main.go:148] setting mtu 1500 for CNI 
	I1013 22:11:09.600865       1 main.go:178] kindnetd IP family: "ipv4"
	I1013 22:11:09.600876       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-13T22:11:09Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1013 22:11:09.753787       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1013 22:11:09.753814       1 controller.go:381] "Waiting for informer caches to sync"
	I1013 22:11:09.753822       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1013 22:11:09.754568       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1013 22:11:39.754627       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1013 22:11:39.754769       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1013 22:11:39.754854       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1013 22:11:39.754976       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1013 22:11:40.754378       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1013 22:11:40.754424       1 metrics.go:72] Registering metrics
	I1013 22:11:40.754485       1 controller.go:711] "Syncing nftables rules"
	I1013 22:11:49.754035       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1013 22:11:49.754144       1 main.go:301] handling current node
	I1013 22:11:59.754035       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1013 22:11:59.754065       1 main.go:301] handling current node
	
	
	==> kube-apiserver [5bbc4021a2610d0a72615ef54a61b83477debc9e67e27338b6ebdad10f29a7bb] <==
	I1013 22:11:07.780056       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1013 22:11:07.780210       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1013 22:11:07.784512       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1013 22:11:07.784598       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1013 22:11:07.784735       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1013 22:11:07.785001       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1013 22:11:07.785069       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1013 22:11:07.785126       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1013 22:11:07.811964       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1013 22:11:07.823195       1 aggregator.go:171] initial CRD sync complete...
	I1013 22:11:07.835468       1 autoregister_controller.go:144] Starting autoregister controller
	I1013 22:11:07.835549       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1013 22:11:07.835582       1 cache.go:39] Caches are synced for autoregister controller
	I1013 22:11:08.021586       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	E1013 22:11:08.047755       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1013 22:11:08.396970       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1013 22:11:10.293352       1 controller.go:667] quota admission added evaluator for: namespaces
	I1013 22:11:10.629420       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1013 22:11:10.800436       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1013 22:11:10.865194       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1013 22:11:11.120843       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.52.168"}
	I1013 22:11:11.145990       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.165.238"}
	I1013 22:11:13.986242       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1013 22:11:14.099181       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1013 22:11:14.224625       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [bd56c0184294021b9044112e6391397dce68b76fd94d9c861cdd5ada9d399899] <==
	I1013 22:11:13.576120       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1013 22:11:13.577278       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1013 22:11:13.578259       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1013 22:11:13.578310       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1013 22:11:13.578340       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1013 22:11:13.578492       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1013 22:11:13.586130       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1013 22:11:13.586622       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1013 22:11:13.592361       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1013 22:11:13.592476       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1013 22:11:13.592510       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1013 22:11:13.592547       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1013 22:11:13.592628       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1013 22:11:13.616494       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1013 22:11:13.616670       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1013 22:11:13.616791       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-007533"
	I1013 22:11:13.616874       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1013 22:11:13.622465       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1013 22:11:13.626776       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1013 22:11:13.626949       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1013 22:11:13.658476       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 22:11:13.676408       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 22:11:13.676501       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1013 22:11:13.676533       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1013 22:11:14.142034       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kubernetes-dashboard/dashboard-metrics-scraper" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [6a5df3c5a9045027560adb2e9d88517dd47a910cecaaaaec5cf2423307ae5e71] <==
	I1013 22:11:10.756589       1 server_linux.go:53] "Using iptables proxy"
	I1013 22:11:10.944373       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 22:11:11.055147       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 22:11:11.055196       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1013 22:11:11.055320       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 22:11:11.205225       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1013 22:11:11.205347       1 server_linux.go:132] "Using iptables Proxier"
	I1013 22:11:11.210619       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 22:11:11.210995       1 server.go:527] "Version info" version="v1.34.1"
	I1013 22:11:11.211170       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 22:11:11.212575       1 config.go:200] "Starting service config controller"
	I1013 22:11:11.212623       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 22:11:11.212682       1 config.go:106] "Starting endpoint slice config controller"
	I1013 22:11:11.212709       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 22:11:11.212779       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 22:11:11.212806       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 22:11:11.213685       1 config.go:309] "Starting node config controller"
	I1013 22:11:11.213759       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 22:11:11.213790       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 22:11:11.312988       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1013 22:11:11.313194       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1013 22:11:11.313217       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [3970a5fddb4ed9fafd03a56430ff0a855693ce410e375d5e6f5b23115bdec4fe] <==
	I1013 22:11:05.106406       1 serving.go:386] Generated self-signed cert in-memory
	I1013 22:11:10.707743       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1013 22:11:10.711456       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 22:11:10.764231       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1013 22:11:10.764400       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1013 22:11:10.764455       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1013 22:11:10.764514       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1013 22:11:10.766401       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1013 22:11:10.766487       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1013 22:11:10.767635       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 22:11:10.767707       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 22:11:10.865436       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1013 22:11:10.868548       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 22:11:10.868619       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Oct 13 22:11:08 default-k8s-diff-port-007533 kubelet[777]: W1013 22:11:08.836164     777 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/42b7859eebb1b617062a071a3162eafb492415d4bedb857080058ccb1421015f/crio-5aa54aa3a055237b702213076d21cde3afa089f28f271e354cf5779833481418 WatchSource:0}: Error finding container 5aa54aa3a055237b702213076d21cde3afa089f28f271e354cf5779833481418: Status 404 returned error can't find the container with id 5aa54aa3a055237b702213076d21cde3afa089f28f271e354cf5779833481418
	Oct 13 22:11:14 default-k8s-diff-port-007533 kubelet[777]: I1013 22:11:14.215016     777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/ba9e1654-c75c-4cdc-bd62-40572b9c029b-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-ktrdv\" (UID: \"ba9e1654-c75c-4cdc-bd62-40572b9c029b\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-ktrdv"
	Oct 13 22:11:14 default-k8s-diff-port-007533 kubelet[777]: I1013 22:11:14.215501     777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5wbt\" (UniqueName: \"kubernetes.io/projected/ba9e1654-c75c-4cdc-bd62-40572b9c029b-kube-api-access-m5wbt\") pod \"kubernetes-dashboard-855c9754f9-ktrdv\" (UID: \"ba9e1654-c75c-4cdc-bd62-40572b9c029b\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-ktrdv"
	Oct 13 22:11:14 default-k8s-diff-port-007533 kubelet[777]: I1013 22:11:14.215614     777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/fc209cd9-0417-4bfd-a13f-de31873f9492-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-jbcqw\" (UID: \"fc209cd9-0417-4bfd-a13f-de31873f9492\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jbcqw"
	Oct 13 22:11:14 default-k8s-diff-port-007533 kubelet[777]: I1013 22:11:14.215724     777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-922m4\" (UniqueName: \"kubernetes.io/projected/fc209cd9-0417-4bfd-a13f-de31873f9492-kube-api-access-922m4\") pod \"dashboard-metrics-scraper-6ffb444bf9-jbcqw\" (UID: \"fc209cd9-0417-4bfd-a13f-de31873f9492\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jbcqw"
	Oct 13 22:11:14 default-k8s-diff-port-007533 kubelet[777]: W1013 22:11:14.477907     777 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/42b7859eebb1b617062a071a3162eafb492415d4bedb857080058ccb1421015f/crio-664fc27f5f788da000b98141f8ab86ed4b55a7fc01be524dd66678952c64336e WatchSource:0}: Error finding container 664fc27f5f788da000b98141f8ab86ed4b55a7fc01be524dd66678952c64336e: Status 404 returned error can't find the container with id 664fc27f5f788da000b98141f8ab86ed4b55a7fc01be524dd66678952c64336e
	Oct 13 22:11:21 default-k8s-diff-port-007533 kubelet[777]: I1013 22:11:21.701119     777 scope.go:117] "RemoveContainer" containerID="8e49a86e95a1b0f6987fd075d284be258d3f2e536aa8a903968d03bc97e33600"
	Oct 13 22:11:22 default-k8s-diff-port-007533 kubelet[777]: I1013 22:11:22.721002     777 scope.go:117] "RemoveContainer" containerID="8e49a86e95a1b0f6987fd075d284be258d3f2e536aa8a903968d03bc97e33600"
	Oct 13 22:11:22 default-k8s-diff-port-007533 kubelet[777]: I1013 22:11:22.723979     777 scope.go:117] "RemoveContainer" containerID="16076c96ca4175f14eaeb700c0d0cba0fb3975d7ff1b6aa9aea477d14ff7ffe7"
	Oct 13 22:11:22 default-k8s-diff-port-007533 kubelet[777]: E1013 22:11:22.732372     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jbcqw_kubernetes-dashboard(fc209cd9-0417-4bfd-a13f-de31873f9492)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jbcqw" podUID="fc209cd9-0417-4bfd-a13f-de31873f9492"
	Oct 13 22:11:23 default-k8s-diff-port-007533 kubelet[777]: I1013 22:11:23.725733     777 scope.go:117] "RemoveContainer" containerID="16076c96ca4175f14eaeb700c0d0cba0fb3975d7ff1b6aa9aea477d14ff7ffe7"
	Oct 13 22:11:23 default-k8s-diff-port-007533 kubelet[777]: E1013 22:11:23.725913     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jbcqw_kubernetes-dashboard(fc209cd9-0417-4bfd-a13f-de31873f9492)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jbcqw" podUID="fc209cd9-0417-4bfd-a13f-de31873f9492"
	Oct 13 22:11:24 default-k8s-diff-port-007533 kubelet[777]: I1013 22:11:24.731805     777 scope.go:117] "RemoveContainer" containerID="16076c96ca4175f14eaeb700c0d0cba0fb3975d7ff1b6aa9aea477d14ff7ffe7"
	Oct 13 22:11:24 default-k8s-diff-port-007533 kubelet[777]: E1013 22:11:24.732002     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jbcqw_kubernetes-dashboard(fc209cd9-0417-4bfd-a13f-de31873f9492)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jbcqw" podUID="fc209cd9-0417-4bfd-a13f-de31873f9492"
	Oct 13 22:11:39 default-k8s-diff-port-007533 kubelet[777]: I1013 22:11:39.328048     777 scope.go:117] "RemoveContainer" containerID="16076c96ca4175f14eaeb700c0d0cba0fb3975d7ff1b6aa9aea477d14ff7ffe7"
	Oct 13 22:11:39 default-k8s-diff-port-007533 kubelet[777]: I1013 22:11:39.774521     777 scope.go:117] "RemoveContainer" containerID="16076c96ca4175f14eaeb700c0d0cba0fb3975d7ff1b6aa9aea477d14ff7ffe7"
	Oct 13 22:11:39 default-k8s-diff-port-007533 kubelet[777]: I1013 22:11:39.774900     777 scope.go:117] "RemoveContainer" containerID="abf6fe6a0c2b477370ce1b10a59d5afef966b3d2278b2343bb8f29356a375406"
	Oct 13 22:11:39 default-k8s-diff-port-007533 kubelet[777]: E1013 22:11:39.777987     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jbcqw_kubernetes-dashboard(fc209cd9-0417-4bfd-a13f-de31873f9492)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jbcqw" podUID="fc209cd9-0417-4bfd-a13f-de31873f9492"
	Oct 13 22:11:39 default-k8s-diff-port-007533 kubelet[777]: I1013 22:11:39.815987     777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-ktrdv" podStartSLOduration=12.570722904 podStartE2EDuration="25.815969423s" podCreationTimestamp="2025-10-13 22:11:14 +0000 UTC" firstStartedPulling="2025-10-13 22:11:14.482496197 +0000 UTC m=+16.430526681" lastFinishedPulling="2025-10-13 22:11:27.727742707 +0000 UTC m=+29.675773200" observedRunningTime="2025-10-13 22:11:28.758169225 +0000 UTC m=+30.706199726" watchObservedRunningTime="2025-10-13 22:11:39.815969423 +0000 UTC m=+41.763999908"
	Oct 13 22:11:40 default-k8s-diff-port-007533 kubelet[777]: I1013 22:11:40.779053     777 scope.go:117] "RemoveContainer" containerID="91481353c67cc68bb4298db655c0ba872a70547e325a61bce015a48724300e10"
	Oct 13 22:11:44 default-k8s-diff-port-007533 kubelet[777]: I1013 22:11:44.412072     777 scope.go:117] "RemoveContainer" containerID="abf6fe6a0c2b477370ce1b10a59d5afef966b3d2278b2343bb8f29356a375406"
	Oct 13 22:11:44 default-k8s-diff-port-007533 kubelet[777]: E1013 22:11:44.412916     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jbcqw_kubernetes-dashboard(fc209cd9-0417-4bfd-a13f-de31873f9492)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jbcqw" podUID="fc209cd9-0417-4bfd-a13f-de31873f9492"
	Oct 13 22:11:56 default-k8s-diff-port-007533 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 13 22:11:57 default-k8s-diff-port-007533 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 13 22:11:57 default-k8s-diff-port-007533 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [c4bacb88f25bc5a14376dfc758244b8c2ccbf962e0fd287744b5751bf14025f0] <==
	2025/10/13 22:11:27 Using namespace: kubernetes-dashboard
	2025/10/13 22:11:27 Using in-cluster config to connect to apiserver
	2025/10/13 22:11:27 Using secret token for csrf signing
	2025/10/13 22:11:27 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/13 22:11:27 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/13 22:11:27 Successful initial request to the apiserver, version: v1.34.1
	2025/10/13 22:11:27 Generating JWE encryption key
	2025/10/13 22:11:27 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/13 22:11:27 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/13 22:11:28 Initializing JWE encryption key from synchronized object
	2025/10/13 22:11:28 Creating in-cluster Sidecar client
	2025/10/13 22:11:28 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/13 22:11:28 Serving insecurely on HTTP port: 9090
	2025/10/13 22:11:58 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/13 22:11:27 Starting overwatch
	
	
	==> storage-provisioner [55549ec52a2ac020ce56c1f9974b4fd36115996f322fd5e802bb928b4087a999] <==
	I1013 22:11:40.850021       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1013 22:11:40.863940       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1013 22:11:40.864180       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1013 22:11:40.867390       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:11:44.322701       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:11:48.582553       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:11:52.180965       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:11:55.235194       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:11:58.257332       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:11:58.262627       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1013 22:11:58.262974       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1013 22:11:58.263189       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-007533_e4cbd55e-d839-415f-951c-75c380b66e03!
	I1013 22:11:58.270736       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a74fbebd-1296-493d-a460-f6003ff9a0e7", APIVersion:"v1", ResourceVersion:"676", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-007533_e4cbd55e-d839-415f-951c-75c380b66e03 became leader
	W1013 22:11:58.274387       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:11:58.283520       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1013 22:11:58.366273       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-007533_e4cbd55e-d839-415f-951c-75c380b66e03!
	
	
	==> storage-provisioner [91481353c67cc68bb4298db655c0ba872a70547e325a61bce015a48724300e10] <==
	I1013 22:11:10.204889       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1013 22:11:40.224321       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-007533 -n default-k8s-diff-port-007533
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-007533 -n default-k8s-diff-port-007533: exit status 2 (429.412914ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-007533 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-007533
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-007533:

-- stdout --
	[
	    {
	        "Id": "42b7859eebb1b617062a071a3162eafb492415d4bedb857080058ccb1421015f",
	        "Created": "2025-10-13T22:09:09.643322038Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 208715,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-13T22:10:49.464822718Z",
	            "FinishedAt": "2025-10-13T22:10:48.68038616Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/42b7859eebb1b617062a071a3162eafb492415d4bedb857080058ccb1421015f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/42b7859eebb1b617062a071a3162eafb492415d4bedb857080058ccb1421015f/hostname",
	        "HostsPath": "/var/lib/docker/containers/42b7859eebb1b617062a071a3162eafb492415d4bedb857080058ccb1421015f/hosts",
	        "LogPath": "/var/lib/docker/containers/42b7859eebb1b617062a071a3162eafb492415d4bedb857080058ccb1421015f/42b7859eebb1b617062a071a3162eafb492415d4bedb857080058ccb1421015f-json.log",
	        "Name": "/default-k8s-diff-port-007533",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "default-k8s-diff-port-007533:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-007533",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "42b7859eebb1b617062a071a3162eafb492415d4bedb857080058ccb1421015f",
	                "LowerDir": "/var/lib/docker/overlay2/3a110be703c83a69e062725614d21230b1ee1b9bfe56d3879096cfac4be3ae94-init/diff:/var/lib/docker/overlay2/160e637be34680c755ffc6109c686a7fe5c4dfcba06c9274b3806724dc064518/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3a110be703c83a69e062725614d21230b1ee1b9bfe56d3879096cfac4be3ae94/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3a110be703c83a69e062725614d21230b1ee1b9bfe56d3879096cfac4be3ae94/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3a110be703c83a69e062725614d21230b1ee1b9bfe56d3879096cfac4be3ae94/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-007533",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-007533/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-007533",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-007533",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-007533",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "43a204ddcc6f897ab1aa422931fed29e2f2a2b7f1b724af05bb618655af48148",
	            "SandboxKey": "/var/run/docker/netns/43a204ddcc6f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-007533": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "52:8a:11:e4:c2:33",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c207adec0a146b3ee3021b2c1eb78ecdd6cde3a3946c5c593fd373dfc1a3d79d",
	                    "EndpointID": "c538c0e4c52446b8f994909f1609277901ad2cfeb3e8ba2463cbd49f9d884665",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-007533",
	                        "42b7859eebb1"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-007533 -n default-k8s-diff-port-007533
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-007533 -n default-k8s-diff-port-007533: exit status 2 (382.324677ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-007533 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-007533 logs -n 25: (1.38640609s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ pause   │ -p no-preload-998398 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-998398            │ jenkins │ v1.37.0 │ 13 Oct 25 22:08 UTC │                     │
	│ delete  │ -p no-preload-998398                                                                                                                                                                                                                          │ no-preload-998398            │ jenkins │ v1.37.0 │ 13 Oct 25 22:08 UTC │ 13 Oct 25 22:09 UTC │
	│ delete  │ -p no-preload-998398                                                                                                                                                                                                                          │ no-preload-998398            │ jenkins │ v1.37.0 │ 13 Oct 25 22:09 UTC │ 13 Oct 25 22:09 UTC │
	│ delete  │ -p disable-driver-mounts-691681                                                                                                                                                                                                               │ disable-driver-mounts-691681 │ jenkins │ v1.37.0 │ 13 Oct 25 22:09 UTC │ 13 Oct 25 22:09 UTC │
	│ start   │ -p default-k8s-diff-port-007533 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-007533 │ jenkins │ v1.37.0 │ 13 Oct 25 22:09 UTC │ 13 Oct 25 22:10 UTC │
	│ image   │ embed-certs-251758 image list --format=json                                                                                                                                                                                                   │ embed-certs-251758           │ jenkins │ v1.37.0 │ 13 Oct 25 22:09 UTC │ 13 Oct 25 22:09 UTC │
	│ pause   │ -p embed-certs-251758 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-251758           │ jenkins │ v1.37.0 │ 13 Oct 25 22:09 UTC │                     │
	│ delete  │ -p embed-certs-251758                                                                                                                                                                                                                         │ embed-certs-251758           │ jenkins │ v1.37.0 │ 13 Oct 25 22:09 UTC │ 13 Oct 25 22:09 UTC │
	│ delete  │ -p embed-certs-251758                                                                                                                                                                                                                         │ embed-certs-251758           │ jenkins │ v1.37.0 │ 13 Oct 25 22:10 UTC │ 13 Oct 25 22:10 UTC │
	│ start   │ -p newest-cni-400889 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-400889            │ jenkins │ v1.37.0 │ 13 Oct 25 22:10 UTC │ 13 Oct 25 22:10 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-007533 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-007533 │ jenkins │ v1.37.0 │ 13 Oct 25 22:10 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-007533 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-007533 │ jenkins │ v1.37.0 │ 13 Oct 25 22:10 UTC │ 13 Oct 25 22:10 UTC │
	│ addons  │ enable metrics-server -p newest-cni-400889 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-400889            │ jenkins │ v1.37.0 │ 13 Oct 25 22:10 UTC │                     │
	│ stop    │ -p newest-cni-400889 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-400889            │ jenkins │ v1.37.0 │ 13 Oct 25 22:10 UTC │ 13 Oct 25 22:10 UTC │
	│ addons  │ enable dashboard -p newest-cni-400889 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-400889            │ jenkins │ v1.37.0 │ 13 Oct 25 22:10 UTC │ 13 Oct 25 22:10 UTC │
	│ start   │ -p newest-cni-400889 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-400889            │ jenkins │ v1.37.0 │ 13 Oct 25 22:10 UTC │ 13 Oct 25 22:11 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-007533 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-007533 │ jenkins │ v1.37.0 │ 13 Oct 25 22:10 UTC │ 13 Oct 25 22:10 UTC │
	│ start   │ -p default-k8s-diff-port-007533 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-007533 │ jenkins │ v1.37.0 │ 13 Oct 25 22:10 UTC │ 13 Oct 25 22:11 UTC │
	│ image   │ newest-cni-400889 image list --format=json                                                                                                                                                                                                    │ newest-cni-400889            │ jenkins │ v1.37.0 │ 13 Oct 25 22:11 UTC │ 13 Oct 25 22:11 UTC │
	│ pause   │ -p newest-cni-400889 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-400889            │ jenkins │ v1.37.0 │ 13 Oct 25 22:11 UTC │                     │
	│ delete  │ -p newest-cni-400889                                                                                                                                                                                                                          │ newest-cni-400889            │ jenkins │ v1.37.0 │ 13 Oct 25 22:11 UTC │ 13 Oct 25 22:11 UTC │
	│ delete  │ -p newest-cni-400889                                                                                                                                                                                                                          │ newest-cni-400889            │ jenkins │ v1.37.0 │ 13 Oct 25 22:11 UTC │ 13 Oct 25 22:11 UTC │
	│ start   │ -p auto-122822 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-122822                  │ jenkins │ v1.37.0 │ 13 Oct 25 22:11 UTC │                     │
	│ image   │ default-k8s-diff-port-007533 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-007533 │ jenkins │ v1.37.0 │ 13 Oct 25 22:11 UTC │ 13 Oct 25 22:11 UTC │
	│ pause   │ -p default-k8s-diff-port-007533 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-007533 │ jenkins │ v1.37.0 │ 13 Oct 25 22:11 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 22:11:18
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 22:11:18.641464  213009 out.go:360] Setting OutFile to fd 1 ...
	I1013 22:11:18.641578  213009 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:11:18.641583  213009 out.go:374] Setting ErrFile to fd 2...
	I1013 22:11:18.641587  213009 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 22:11:18.641925  213009 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-2495/.minikube/bin
	I1013 22:11:18.642397  213009 out.go:368] Setting JSON to false
	I1013 22:11:18.643318  213009 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":6813,"bootTime":1760386666,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1013 22:11:18.643408  213009 start.go:141] virtualization:  
	I1013 22:11:18.647934  213009 out.go:179] * [auto-122822] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1013 22:11:18.652898  213009 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 22:11:18.652944  213009 notify.go:220] Checking for updates...
	I1013 22:11:18.661555  213009 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 22:11:18.665039  213009 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-2495/kubeconfig
	I1013 22:11:18.668424  213009 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-2495/.minikube
	I1013 22:11:18.672238  213009 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1013 22:11:18.675948  213009 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 22:11:18.680216  213009 config.go:182] Loaded profile config "default-k8s-diff-port-007533": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:11:18.680352  213009 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 22:11:18.716032  213009 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1013 22:11:18.716148  213009 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 22:11:18.828418  213009 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-13 22:11:18.814430305 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 22:11:18.828539  213009 docker.go:318] overlay module found
	I1013 22:11:18.833204  213009 out.go:179] * Using the docker driver based on user configuration
	I1013 22:11:18.836511  213009 start.go:305] selected driver: docker
	I1013 22:11:18.836534  213009 start.go:925] validating driver "docker" against <nil>
	I1013 22:11:18.836547  213009 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 22:11:18.837295  213009 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 22:11:18.932702  213009 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-13 22:11:18.921171572 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 22:11:18.932859  213009 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1013 22:11:18.933158  213009 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1013 22:11:18.936888  213009 out.go:179] * Using Docker driver with root privileges
	I1013 22:11:18.940714  213009 cni.go:84] Creating CNI manager for ""
	I1013 22:11:18.940785  213009 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 22:11:18.940800  213009 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1013 22:11:18.940879  213009 start.go:349] cluster config:
	{Name:auto-122822 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-122822 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:cri
o CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: Au
toPauseInterval:1m0s}
	I1013 22:11:18.944337  213009 out.go:179] * Starting "auto-122822" primary control-plane node in "auto-122822" cluster
	I1013 22:11:18.947512  213009 cache.go:123] Beginning downloading kic base image for docker with crio
	I1013 22:11:18.950814  213009 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1013 22:11:18.953931  213009 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:11:18.953984  213009 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-2495/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1013 22:11:18.954000  213009 cache.go:58] Caching tarball of preloaded images
	I1013 22:11:18.954079  213009 preload.go:233] Found /home/jenkins/minikube-integration/21724-2495/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1013 22:11:18.954089  213009 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1013 22:11:18.954199  213009 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/auto-122822/config.json ...
	I1013 22:11:18.954215  213009 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/auto-122822/config.json: {Name:mkddd2f2811ae79867dec424e9f6cd31b3ebf145 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:11:18.954345  213009 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1013 22:11:18.984028  213009 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1013 22:11:18.984047  213009 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1013 22:11:18.984065  213009 cache.go:232] Successfully downloaded all kic artifacts
	I1013 22:11:18.984088  213009 start.go:360] acquireMachinesLock for auto-122822: {Name:mka0d1339a97472877dad96588ce1f47613d1d53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1013 22:11:18.984185  213009 start.go:364] duration metric: took 81.705µs to acquireMachinesLock for "auto-122822"
	I1013 22:11:18.984210  213009 start.go:93] Provisioning new machine with config: &{Name:auto-122822 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-122822 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: Soc
ketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 22:11:18.984280  213009 start.go:125] createHost starting for "" (driver="docker")
	W1013 22:11:16.246881  208589 pod_ready.go:104] pod "coredns-66bc5c9577-vftdh" is not "Ready", error: <nil>
	W1013 22:11:18.254581  208589 pod_ready.go:104] pod "coredns-66bc5c9577-vftdh" is not "Ready", error: <nil>
	I1013 22:11:18.989121  213009 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1013 22:11:18.989331  213009 start.go:159] libmachine.API.Create for "auto-122822" (driver="docker")
	I1013 22:11:18.989367  213009 client.go:168] LocalClient.Create starting
	I1013 22:11:18.989430  213009 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem
	I1013 22:11:18.989461  213009 main.go:141] libmachine: Decoding PEM data...
	I1013 22:11:18.989482  213009 main.go:141] libmachine: Parsing certificate...
	I1013 22:11:18.989540  213009 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem
	I1013 22:11:18.989557  213009 main.go:141] libmachine: Decoding PEM data...
	I1013 22:11:18.989567  213009 main.go:141] libmachine: Parsing certificate...
	I1013 22:11:18.989906  213009 cli_runner.go:164] Run: docker network inspect auto-122822 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1013 22:11:19.010616  213009 cli_runner.go:211] docker network inspect auto-122822 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1013 22:11:19.010700  213009 network_create.go:284] running [docker network inspect auto-122822] to gather additional debugging logs...
	I1013 22:11:19.010722  213009 cli_runner.go:164] Run: docker network inspect auto-122822
	W1013 22:11:19.032524  213009 cli_runner.go:211] docker network inspect auto-122822 returned with exit code 1
	I1013 22:11:19.032559  213009 network_create.go:287] error running [docker network inspect auto-122822]: docker network inspect auto-122822: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-122822 not found
	I1013 22:11:19.032572  213009 network_create.go:289] output of [docker network inspect auto-122822]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-122822 not found
	
	** /stderr **
	I1013 22:11:19.032678  213009 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 22:11:19.056983  213009 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-95647f6063f5 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:2e:3d:b3:ce:26:60} reservation:<nil>}
	I1013 22:11:19.057327  213009 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-524c3512c6b6 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:06:88:a1:02:e0:8e} reservation:<nil>}
	I1013 22:11:19.057644  213009 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-2d17b8b5c002 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:aa:ca:29:7e:1f:a0} reservation:<nil>}
	I1013 22:11:19.057883  213009 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-c207adec0a14 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:3a:30:41:df:49:ee} reservation:<nil>}
	I1013 22:11:19.058261  213009 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a629a0}
	I1013 22:11:19.058282  213009 network_create.go:124] attempt to create docker network auto-122822 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1013 22:11:19.058344  213009 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-122822 auto-122822
	I1013 22:11:19.111264  213009 network_create.go:108] docker network auto-122822 192.168.85.0/24 created
	I1013 22:11:19.111291  213009 kic.go:121] calculated static IP "192.168.85.2" for the "auto-122822" container
	I1013 22:11:19.111362  213009 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1013 22:11:19.135936  213009 cli_runner.go:164] Run: docker volume create auto-122822 --label name.minikube.sigs.k8s.io=auto-122822 --label created_by.minikube.sigs.k8s.io=true
	I1013 22:11:19.153756  213009 oci.go:103] Successfully created a docker volume auto-122822
	I1013 22:11:19.153833  213009 cli_runner.go:164] Run: docker run --rm --name auto-122822-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-122822 --entrypoint /usr/bin/test -v auto-122822:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1013 22:11:19.937998  213009 oci.go:107] Successfully prepared a docker volume auto-122822
	I1013 22:11:19.938051  213009 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:11:19.938071  213009 kic.go:194] Starting extracting preloaded images to volume ...
	I1013 22:11:19.938151  213009 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-2495/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-122822:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	W1013 22:11:20.744028  208589 pod_ready.go:104] pod "coredns-66bc5c9577-vftdh" is not "Ready", error: <nil>
	W1013 22:11:22.749499  208589 pod_ready.go:104] pod "coredns-66bc5c9577-vftdh" is not "Ready", error: <nil>
	I1013 22:11:25.345591  213009 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-2495/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-122822:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (5.407405914s)
	I1013 22:11:25.345632  213009 kic.go:203] duration metric: took 5.407557877s to extract preloaded images to volume ...
	W1013 22:11:25.345776  213009 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1013 22:11:25.345888  213009 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1013 22:11:25.437937  213009 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-122822 --name auto-122822 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-122822 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-122822 --network auto-122822 --ip 192.168.85.2 --volume auto-122822:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1013 22:11:25.929344  213009 cli_runner.go:164] Run: docker container inspect auto-122822 --format={{.State.Running}}
	I1013 22:11:25.953523  213009 cli_runner.go:164] Run: docker container inspect auto-122822 --format={{.State.Status}}
	I1013 22:11:25.988849  213009 cli_runner.go:164] Run: docker exec auto-122822 stat /var/lib/dpkg/alternatives/iptables
	I1013 22:11:26.057082  213009 oci.go:144] the created container "auto-122822" has a running status.
	I1013 22:11:26.057120  213009 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21724-2495/.minikube/machines/auto-122822/id_rsa...
	I1013 22:11:26.704836  213009 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21724-2495/.minikube/machines/auto-122822/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1013 22:11:26.728568  213009 cli_runner.go:164] Run: docker container inspect auto-122822 --format={{.State.Status}}
	I1013 22:11:26.752121  213009 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1013 22:11:26.752220  213009 kic_runner.go:114] Args: [docker exec --privileged auto-122822 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1013 22:11:26.821844  213009 cli_runner.go:164] Run: docker container inspect auto-122822 --format={{.State.Status}}
	I1013 22:11:26.847070  213009 machine.go:93] provisionDockerMachine start ...
	I1013 22:11:26.847154  213009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-122822
	I1013 22:11:26.877586  213009 main.go:141] libmachine: Using SSH client type: native
	I1013 22:11:26.877912  213009 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33101 <nil> <nil>}
	I1013 22:11:26.877923  213009 main.go:141] libmachine: About to run SSH command:
	hostname
	I1013 22:11:26.882739  213009 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	W1013 22:11:24.772310  208589 pod_ready.go:104] pod "coredns-66bc5c9577-vftdh" is not "Ready", error: <nil>
	W1013 22:11:27.243733  208589 pod_ready.go:104] pod "coredns-66bc5c9577-vftdh" is not "Ready", error: <nil>
	I1013 22:11:30.032904  213009 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-122822
	
	I1013 22:11:30.032930  213009 ubuntu.go:182] provisioning hostname "auto-122822"
	I1013 22:11:30.033006  213009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-122822
	I1013 22:11:30.063553  213009 main.go:141] libmachine: Using SSH client type: native
	I1013 22:11:30.063899  213009 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33101 <nil> <nil>}
	I1013 22:11:30.063915  213009 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-122822 && echo "auto-122822" | sudo tee /etc/hostname
	I1013 22:11:30.230112  213009 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-122822
	
	I1013 22:11:30.230183  213009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-122822
	I1013 22:11:30.254884  213009 main.go:141] libmachine: Using SSH client type: native
	I1013 22:11:30.255194  213009 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33101 <nil> <nil>}
	I1013 22:11:30.255216  213009 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-122822' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-122822/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-122822' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1013 22:11:30.399907  213009 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1013 22:11:30.399933  213009 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-2495/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-2495/.minikube}
	I1013 22:11:30.399952  213009 ubuntu.go:190] setting up certificates
	I1013 22:11:30.399961  213009 provision.go:84] configureAuth start
	I1013 22:11:30.400017  213009 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-122822
	I1013 22:11:30.418873  213009 provision.go:143] copyHostCerts
	I1013 22:11:30.418949  213009 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-2495/.minikube/ca.pem, removing ...
	I1013 22:11:30.418963  213009 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-2495/.minikube/ca.pem
	I1013 22:11:30.419043  213009 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-2495/.minikube/ca.pem (1082 bytes)
	I1013 22:11:30.419142  213009 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-2495/.minikube/cert.pem, removing ...
	I1013 22:11:30.419154  213009 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-2495/.minikube/cert.pem
	I1013 22:11:30.419181  213009 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-2495/.minikube/cert.pem (1123 bytes)
	I1013 22:11:30.419235  213009 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-2495/.minikube/key.pem, removing ...
	I1013 22:11:30.419244  213009 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-2495/.minikube/key.pem
	I1013 22:11:30.419269  213009 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-2495/.minikube/key.pem (1675 bytes)
	I1013 22:11:30.419322  213009 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-2495/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca-key.pem org=jenkins.auto-122822 san=[127.0.0.1 192.168.85.2 auto-122822 localhost minikube]
	I1013 22:11:31.111667  213009 provision.go:177] copyRemoteCerts
	I1013 22:11:31.111753  213009 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1013 22:11:31.111853  213009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-122822
	I1013 22:11:31.131573  213009 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33101 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/auto-122822/id_rsa Username:docker}
	I1013 22:11:31.235483  213009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1013 22:11:31.256482  213009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1013 22:11:31.275515  213009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1013 22:11:31.294388  213009 provision.go:87] duration metric: took 894.404674ms to configureAuth
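	The server certificate generated above is signed for the SANs shown in the log (127.0.0.1, 192.168.85.2, auto-122822, localhost, minikube) and then copied to /etc/docker/server.pem. A minimal, illustrative check (a standard openssl invocation, not something the test runs; the path is taken from the scp lines above):
	# Editorial sketch: print the SANs of the server cert that was just generated and copied.
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/21724-2495/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'
	# Expected to list roughly: DNS:auto-122822, DNS:localhost, DNS:minikube, IP:127.0.0.1, IP:192.168.85.2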
	I1013 22:11:31.294450  213009 ubuntu.go:206] setting minikube options for container-runtime
	I1013 22:11:31.294666  213009 config.go:182] Loaded profile config "auto-122822": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:11:31.294800  213009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-122822
	I1013 22:11:31.311489  213009 main.go:141] libmachine: Using SSH client type: native
	I1013 22:11:31.311839  213009 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33101 <nil> <nil>}
	I1013 22:11:31.311861  213009 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1013 22:11:31.570527  213009 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1013 22:11:31.570547  213009 machine.go:96] duration metric: took 4.723458388s to provisionDockerMachine
	I1013 22:11:31.570557  213009 client.go:171] duration metric: took 12.581184171s to LocalClient.Create
	I1013 22:11:31.570575  213009 start.go:167] duration metric: took 12.58124579s to libmachine.API.Create "auto-122822"
	I1013 22:11:31.570583  213009 start.go:293] postStartSetup for "auto-122822" (driver="docker")
	I1013 22:11:31.570593  213009 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1013 22:11:31.570659  213009 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1013 22:11:31.570701  213009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-122822
	I1013 22:11:31.588588  213009 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33101 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/auto-122822/id_rsa Username:docker}
	I1013 22:11:31.691651  213009 ssh_runner.go:195] Run: cat /etc/os-release
	I1013 22:11:31.695058  213009 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1013 22:11:31.695088  213009 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1013 22:11:31.695100  213009 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-2495/.minikube/addons for local assets ...
	I1013 22:11:31.695170  213009 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-2495/.minikube/files for local assets ...
	I1013 22:11:31.695280  213009 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem -> 42992.pem in /etc/ssl/certs
	I1013 22:11:31.695418  213009 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1013 22:11:31.703265  213009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem --> /etc/ssl/certs/42992.pem (1708 bytes)
	I1013 22:11:31.726367  213009 start.go:296] duration metric: took 155.769671ms for postStartSetup
	I1013 22:11:31.726718  213009 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-122822
	I1013 22:11:31.752523  213009 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/auto-122822/config.json ...
	I1013 22:11:31.752791  213009 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 22:11:31.752843  213009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-122822
	I1013 22:11:31.770384  213009 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33101 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/auto-122822/id_rsa Username:docker}
	I1013 22:11:31.868686  213009 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1013 22:11:31.873100  213009 start.go:128] duration metric: took 12.888806197s to createHost
	I1013 22:11:31.873121  213009 start.go:83] releasing machines lock for "auto-122822", held for 12.888928031s
	I1013 22:11:31.873190  213009 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-122822
	I1013 22:11:31.891887  213009 ssh_runner.go:195] Run: cat /version.json
	I1013 22:11:31.891944  213009 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1013 22:11:31.891954  213009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-122822
	I1013 22:11:31.892010  213009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-122822
	I1013 22:11:31.915854  213009 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33101 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/auto-122822/id_rsa Username:docker}
	I1013 22:11:31.924098  213009 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33101 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/auto-122822/id_rsa Username:docker}
	I1013 22:11:32.109612  213009 ssh_runner.go:195] Run: systemctl --version
	I1013 22:11:32.117312  213009 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1013 22:11:32.156270  213009 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1013 22:11:32.160562  213009 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1013 22:11:32.160657  213009 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1013 22:11:32.190824  213009 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1013 22:11:32.190885  213009 start.go:495] detecting cgroup driver to use...
	I1013 22:11:32.190940  213009 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1013 22:11:32.191024  213009 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1013 22:11:32.213190  213009 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1013 22:11:32.226666  213009 docker.go:218] disabling cri-docker service (if available) ...
	I1013 22:11:32.226729  213009 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1013 22:11:32.249418  213009 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1013 22:11:32.266520  213009 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1013 22:11:32.400517  213009 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1013 22:11:32.531125  213009 docker.go:234] disabling docker service ...
	I1013 22:11:32.531219  213009 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1013 22:11:32.552370  213009 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1013 22:11:32.567361  213009 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1013 22:11:32.697991  213009 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1013 22:11:32.829095  213009 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1013 22:11:32.842442  213009 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1013 22:11:32.857901  213009 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1013 22:11:32.857986  213009 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:11:32.866903  213009 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1013 22:11:32.867009  213009 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:11:32.876649  213009 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:11:32.885501  213009 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:11:32.894890  213009 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1013 22:11:32.903144  213009 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:11:32.911989  213009 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:11:32.925823  213009 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1013 22:11:32.934579  213009 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1013 22:11:32.942282  213009 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1013 22:11:32.956236  213009 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:11:33.073387  213009 ssh_runner.go:195] Run: sudo systemctl restart crio
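	The sed edits above converge on a handful of keys in the cri-o drop-in before crio is restarted. A hedged, illustrative check (key names are taken from the sed commands above; the exact file layout depends on the kicbase image):
	# Editorial sketch: confirm the values the sed edits should have left in 02-crio.conf.
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# Roughly expected:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",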
	I1013 22:11:33.217357  213009 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1013 22:11:33.217479  213009 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1013 22:11:33.221677  213009 start.go:563] Will wait 60s for crictl version
	I1013 22:11:33.221823  213009 ssh_runner.go:195] Run: which crictl
	I1013 22:11:33.226216  213009 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1013 22:11:33.259560  213009 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1013 22:11:33.259725  213009 ssh_runner.go:195] Run: crio --version
	I1013 22:11:33.295264  213009 ssh_runner.go:195] Run: crio --version
	I1013 22:11:33.328328  213009 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1013 22:11:33.331157  213009 cli_runner.go:164] Run: docker network inspect auto-122822 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1013 22:11:33.347565  213009 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1013 22:11:33.351552  213009 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 22:11:33.360941  213009 kubeadm.go:883] updating cluster {Name:auto-122822 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-122822 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1013 22:11:33.361056  213009 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 22:11:33.361120  213009 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 22:11:33.393457  213009 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 22:11:33.393481  213009 crio.go:433] Images already preloaded, skipping extraction
	I1013 22:11:33.393532  213009 ssh_runner.go:195] Run: sudo crictl images --output json
	I1013 22:11:33.419481  213009 crio.go:514] all images are preloaded for cri-o runtime.
	I1013 22:11:33.419506  213009 cache_images.go:85] Images are preloaded, skipping loading
	I1013 22:11:33.419521  213009 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1013 22:11:33.419615  213009 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-122822 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-122822 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
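	The kubelet unit fragment above is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 361-byte scp further down), alongside /lib/systemd/system/kubelet.service. As an illustrative aside (standard systemd commands, not part of the test), the merged unit can be inspected after the later `systemctl daemon-reload`:
	# Editorial sketch: show the effective kubelet unit and where its drop-ins come from.
	systemctl cat kubelet
	systemctl show kubelet -p FragmentPath -p DropInPaths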
	I1013 22:11:33.419715  213009 ssh_runner.go:195] Run: crio config
	I1013 22:11:33.480554  213009 cni.go:84] Creating CNI manager for ""
	I1013 22:11:33.480578  213009 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 22:11:33.480600  213009 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1013 22:11:33.480623  213009 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-122822 NodeName:auto-122822 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1013 22:11:33.480750  213009 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-122822"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1013 22:11:33.480818  213009 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1013 22:11:33.488465  213009 binaries.go:44] Found k8s binaries, skipping transfer
	I1013 22:11:33.488580  213009 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1013 22:11:33.496032  213009 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1013 22:11:33.509870  213009 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1013 22:11:33.522758  213009 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
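	The kubeadm configuration rendered above is what was just written to /var/tmp/minikube/kubeadm.yaml.new (2208 bytes). As an illustrative aside only (the test itself proceeds straight to `kubeadm init` below), the file can be exercised without modifying the node using kubeadm's dry-run mode; the binary path is the one this log puts on PATH for the real init:
	# Editorial sketch: sanity-check the generated config with a dry run.
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml.new --dry-run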
	I1013 22:11:33.535505  213009 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1013 22:11:33.538816  213009 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1013 22:11:33.548436  213009 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	W1013 22:11:29.742543  208589 pod_ready.go:104] pod "coredns-66bc5c9577-vftdh" is not "Ready", error: <nil>
	W1013 22:11:31.745582  208589 pod_ready.go:104] pod "coredns-66bc5c9577-vftdh" is not "Ready", error: <nil>
	W1013 22:11:33.749276  208589 pod_ready.go:104] pod "coredns-66bc5c9577-vftdh" is not "Ready", error: <nil>
	I1013 22:11:33.670479  213009 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 22:11:33.687503  213009 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/auto-122822 for IP: 192.168.85.2
	I1013 22:11:33.687522  213009 certs.go:195] generating shared ca certs ...
	I1013 22:11:33.687541  213009 certs.go:227] acquiring lock for ca certs: {Name:mk2386d3847709c1fe7ff4ab092e2e3fd8551167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:11:33.687691  213009 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-2495/.minikube/ca.key
	I1013 22:11:33.687729  213009 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.key
	I1013 22:11:33.687736  213009 certs.go:257] generating profile certs ...
	I1013 22:11:33.687847  213009 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/auto-122822/client.key
	I1013 22:11:33.687868  213009 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/auto-122822/client.crt with IP's: []
	I1013 22:11:34.102970  213009 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/auto-122822/client.crt ...
	I1013 22:11:34.103006  213009 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/auto-122822/client.crt: {Name:mk35c08afb3d37df981ceacf86559e2e7099c846 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:11:34.103245  213009 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/auto-122822/client.key ...
	I1013 22:11:34.103265  213009 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/auto-122822/client.key: {Name:mk604b8114fe0926b3be098ec32c6b552a0cba5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:11:34.103393  213009 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/auto-122822/apiserver.key.b531f30f
	I1013 22:11:34.103414  213009 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/auto-122822/apiserver.crt.b531f30f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1013 22:11:34.620299  213009 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/auto-122822/apiserver.crt.b531f30f ...
	I1013 22:11:34.620333  213009 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/auto-122822/apiserver.crt.b531f30f: {Name:mk14740cfd52891948b9ab2ec8d503d0c00264eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:11:34.620528  213009 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/auto-122822/apiserver.key.b531f30f ...
	I1013 22:11:34.620544  213009 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/auto-122822/apiserver.key.b531f30f: {Name:mka5fb5f70d766bcab1695323b9758ddfa229912 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:11:34.620629  213009 certs.go:382] copying /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/auto-122822/apiserver.crt.b531f30f -> /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/auto-122822/apiserver.crt
	I1013 22:11:34.620706  213009 certs.go:386] copying /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/auto-122822/apiserver.key.b531f30f -> /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/auto-122822/apiserver.key
	I1013 22:11:34.620769  213009 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/auto-122822/proxy-client.key
	I1013 22:11:34.620787  213009 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/auto-122822/proxy-client.crt with IP's: []
	I1013 22:11:35.448282  213009 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/auto-122822/proxy-client.crt ...
	I1013 22:11:35.448314  213009 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/auto-122822/proxy-client.crt: {Name:mk78fed5d1f2512d84e91354fec660186bec6c00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:11:35.448509  213009 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/auto-122822/proxy-client.key ...
	I1013 22:11:35.448522  213009 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/auto-122822/proxy-client.key: {Name:mk248c97bad9b465f89aeabe0eda4c2b67d3cddd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:11:35.448720  213009 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/4299.pem (1338 bytes)
	W1013 22:11:35.448763  213009 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-2495/.minikube/certs/4299_empty.pem, impossibly tiny 0 bytes
	I1013 22:11:35.448777  213009 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca-key.pem (1679 bytes)
	I1013 22:11:35.448803  213009 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/ca.pem (1082 bytes)
	I1013 22:11:35.448829  213009 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/cert.pem (1123 bytes)
	I1013 22:11:35.448859  213009 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/certs/key.pem (1675 bytes)
	I1013 22:11:35.448904  213009 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem (1708 bytes)
	I1013 22:11:35.449563  213009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1013 22:11:35.468050  213009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1013 22:11:35.488555  213009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1013 22:11:35.510558  213009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1013 22:11:35.529628  213009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/auto-122822/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1013 22:11:35.549851  213009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/auto-122822/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1013 22:11:35.569164  213009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/auto-122822/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1013 22:11:35.587514  213009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/auto-122822/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1013 22:11:35.605219  213009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/ssl/certs/42992.pem --> /usr/share/ca-certificates/42992.pem (1708 bytes)
	I1013 22:11:35.623205  213009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1013 22:11:35.642438  213009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-2495/.minikube/certs/4299.pem --> /usr/share/ca-certificates/4299.pem (1338 bytes)
	I1013 22:11:35.660646  213009 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1013 22:11:35.673634  213009 ssh_runner.go:195] Run: openssl version
	I1013 22:11:35.680113  213009 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/42992.pem && ln -fs /usr/share/ca-certificates/42992.pem /etc/ssl/certs/42992.pem"
	I1013 22:11:35.688621  213009 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42992.pem
	I1013 22:11:35.692827  213009 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 13 21:06 /usr/share/ca-certificates/42992.pem
	I1013 22:11:35.692895  213009 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42992.pem
	I1013 22:11:35.735994  213009 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/42992.pem /etc/ssl/certs/3ec20f2e.0"
	I1013 22:11:35.746142  213009 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1013 22:11:35.754292  213009 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:11:35.758671  213009 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 13 20:59 /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:11:35.758777  213009 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1013 22:11:35.799994  213009 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1013 22:11:35.808204  213009 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4299.pem && ln -fs /usr/share/ca-certificates/4299.pem /etc/ssl/certs/4299.pem"
	I1013 22:11:35.816966  213009 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4299.pem
	I1013 22:11:35.820864  213009 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 13 21:06 /usr/share/ca-certificates/4299.pem
	I1013 22:11:35.820962  213009 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4299.pem
	I1013 22:11:35.863219  213009 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4299.pem /etc/ssl/certs/51391683.0"
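	The three symlink names used above (3ec20f2e.0, b5213941.0, 51391683.0) are the OpenSSL subject hashes that the preceding `openssl x509 -hash -noout` runs computed for each certificate. An illustrative way to see the correspondence (standard openssl usage, not part of the test):
	# Editorial sketch: the hash printed for each cert matches its /etc/ssl/certs symlink name.
	openssl x509 -noout -hash -in /usr/share/ca-certificates/42992.pem        # 3ec20f2e
	openssl x509 -noout -hash -in /usr/share/ca-certificates/minikubeCA.pem   # b5213941
	openssl x509 -noout -hash -in /usr/share/ca-certificates/4299.pem         # 51391683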
	I1013 22:11:35.875654  213009 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1013 22:11:35.880738  213009 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1013 22:11:35.880819  213009 kubeadm.go:400] StartCluster: {Name:auto-122822 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-122822 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 22:11:35.880899  213009 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1013 22:11:35.880975  213009 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1013 22:11:35.908354  213009 cri.go:89] found id: ""
	I1013 22:11:35.908489  213009 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1013 22:11:35.916073  213009 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1013 22:11:35.924630  213009 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1013 22:11:35.924709  213009 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1013 22:11:35.936266  213009 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1013 22:11:35.936287  213009 kubeadm.go:157] found existing configuration files:
	
	I1013 22:11:35.936336  213009 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1013 22:11:35.948118  213009 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1013 22:11:35.948181  213009 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1013 22:11:35.956720  213009 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1013 22:11:35.965789  213009 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1013 22:11:35.965853  213009 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1013 22:11:35.975468  213009 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1013 22:11:35.985689  213009 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1013 22:11:35.985751  213009 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1013 22:11:35.994003  213009 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1013 22:11:36.001699  213009 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1013 22:11:36.001806  213009 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1013 22:11:36.012827  213009 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1013 22:11:36.063426  213009 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1013 22:11:36.063516  213009 kubeadm.go:318] [preflight] Running pre-flight checks
	I1013 22:11:36.091598  213009 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1013 22:11:36.091706  213009 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1013 22:11:36.091773  213009 kubeadm.go:318] OS: Linux
	I1013 22:11:36.091891  213009 kubeadm.go:318] CGROUPS_CPU: enabled
	I1013 22:11:36.091962  213009 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1013 22:11:36.092030  213009 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1013 22:11:36.092101  213009 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1013 22:11:36.092178  213009 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1013 22:11:36.092258  213009 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1013 22:11:36.092330  213009 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1013 22:11:36.092402  213009 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1013 22:11:36.092473  213009 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1013 22:11:36.163542  213009 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1013 22:11:36.163693  213009 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1013 22:11:36.163839  213009 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1013 22:11:36.172357  213009 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1013 22:11:36.178480  213009 out.go:252]   - Generating certificates and keys ...
	I1013 22:11:36.178582  213009 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1013 22:11:36.178656  213009 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1013 22:11:36.864525  213009 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1013 22:11:36.923267  213009 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1013 22:11:36.982704  213009 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1013 22:11:37.671066  213009 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1013 22:11:38.287529  213009 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1013 22:11:38.287882  213009 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [auto-122822 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1013 22:11:38.501602  213009 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1013 22:11:38.501920  213009 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [auto-122822 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	W1013 22:11:36.246344  208589 pod_ready.go:104] pod "coredns-66bc5c9577-vftdh" is not "Ready", error: <nil>
	W1013 22:11:38.743340  208589 pod_ready.go:104] pod "coredns-66bc5c9577-vftdh" is not "Ready", error: <nil>
	I1013 22:11:38.677585  213009 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1013 22:11:38.937290  213009 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1013 22:11:39.063703  213009 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1013 22:11:39.064022  213009 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1013 22:11:39.253856  213009 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1013 22:11:39.701462  213009 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1013 22:11:40.640966  213009 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1013 22:11:41.092788  213009 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1013 22:11:41.509040  213009 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1013 22:11:41.510014  213009 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1013 22:11:41.512903  213009 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1013 22:11:40.744147  208589 pod_ready.go:104] pod "coredns-66bc5c9577-vftdh" is not "Ready", error: <nil>
	I1013 22:11:41.743855  208589 pod_ready.go:94] pod "coredns-66bc5c9577-vftdh" is "Ready"
	I1013 22:11:41.743880  208589 pod_ready.go:86] duration metric: took 30.006264171s for pod "coredns-66bc5c9577-vftdh" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:11:41.746617  208589 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-007533" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:11:41.751309  208589 pod_ready.go:94] pod "etcd-default-k8s-diff-port-007533" is "Ready"
	I1013 22:11:41.751332  208589 pod_ready.go:86] duration metric: took 4.686387ms for pod "etcd-default-k8s-diff-port-007533" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:11:41.753787  208589 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-007533" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:11:41.758595  208589 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-007533" is "Ready"
	I1013 22:11:41.758636  208589 pod_ready.go:86] duration metric: took 4.812841ms for pod "kube-apiserver-default-k8s-diff-port-007533" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:11:41.761123  208589 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-007533" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:11:41.941517  208589 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-007533" is "Ready"
	I1013 22:11:41.941545  208589 pod_ready.go:86] duration metric: took 180.401197ms for pod "kube-controller-manager-default-k8s-diff-port-007533" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:11:42.153498  208589 pod_ready.go:83] waiting for pod "kube-proxy-5947n" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:11:42.542024  208589 pod_ready.go:94] pod "kube-proxy-5947n" is "Ready"
	I1013 22:11:42.542048  208589 pod_ready.go:86] duration metric: took 388.517012ms for pod "kube-proxy-5947n" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:11:42.743603  208589 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-007533" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:11:43.142288  208589 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-007533" is "Ready"
	I1013 22:11:43.142380  208589 pod_ready.go:86] duration metric: took 398.748447ms for pod "kube-scheduler-default-k8s-diff-port-007533" in "kube-system" namespace to be "Ready" or be gone ...
	I1013 22:11:43.142404  208589 pod_ready.go:40] duration metric: took 31.410791404s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1013 22:11:43.198834  208589 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1013 22:11:43.202288  208589 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-007533" cluster and "default" namespace by default
	I1013 22:11:41.516615  213009 out.go:252]   - Booting up control plane ...
	I1013 22:11:41.516721  213009 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1013 22:11:41.516803  213009 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1013 22:11:41.516873  213009 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1013 22:11:41.532227  213009 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1013 22:11:41.532619  213009 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1013 22:11:41.540999  213009 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1013 22:11:41.541378  213009 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1013 22:11:41.541632  213009 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1013 22:11:41.680257  213009 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1013 22:11:41.680386  213009 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1013 22:11:43.179947  213009 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501782364s
	I1013 22:11:43.188295  213009 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1013 22:11:43.188395  213009 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1013 22:11:43.189604  213009 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1013 22:11:43.189701  213009 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1013 22:11:47.103419  213009 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.913884435s
	I1013 22:11:49.218368  213009 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 6.029085359s
	I1013 22:11:49.692767  213009 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.502164102s
	I1013 22:11:49.728741  213009 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1013 22:11:49.750493  213009 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1013 22:11:49.774635  213009 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1013 22:11:49.774854  213009 kubeadm.go:318] [mark-control-plane] Marking the node auto-122822 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1013 22:11:49.788093  213009 kubeadm.go:318] [bootstrap-token] Using token: z4rcal.1hgvybjffvqspgx8
	I1013 22:11:49.790945  213009 out.go:252]   - Configuring RBAC rules ...
	I1013 22:11:49.791081  213009 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1013 22:11:49.797725  213009 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1013 22:11:49.805980  213009 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1013 22:11:49.814654  213009 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1013 22:11:49.819693  213009 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1013 22:11:49.824467  213009 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1013 22:11:50.100686  213009 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1013 22:11:50.530156  213009 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1013 22:11:51.098974  213009 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1013 22:11:51.100325  213009 kubeadm.go:318] 
	I1013 22:11:51.100407  213009 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1013 22:11:51.100417  213009 kubeadm.go:318] 
	I1013 22:11:51.100508  213009 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1013 22:11:51.100518  213009 kubeadm.go:318] 
	I1013 22:11:51.100565  213009 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1013 22:11:51.100643  213009 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1013 22:11:51.100698  213009 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1013 22:11:51.100710  213009 kubeadm.go:318] 
	I1013 22:11:51.100777  213009 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1013 22:11:51.100782  213009 kubeadm.go:318] 
	I1013 22:11:51.100838  213009 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1013 22:11:51.100843  213009 kubeadm.go:318] 
	I1013 22:11:51.100898  213009 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1013 22:11:51.100977  213009 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1013 22:11:51.101050  213009 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1013 22:11:51.101054  213009 kubeadm.go:318] 
	I1013 22:11:51.101142  213009 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1013 22:11:51.101224  213009 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1013 22:11:51.101229  213009 kubeadm.go:318] 
	I1013 22:11:51.101322  213009 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token z4rcal.1hgvybjffvqspgx8 \
	I1013 22:11:51.101433  213009 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:8246abc30e5ad4f73e0c5665e9f98b5f472397f3707b313971a0405cce9e4e60 \
	I1013 22:11:51.101454  213009 kubeadm.go:318] 	--control-plane 
	I1013 22:11:51.101459  213009 kubeadm.go:318] 
	I1013 22:11:51.101580  213009 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1013 22:11:51.101602  213009 kubeadm.go:318] 
	I1013 22:11:51.101714  213009 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token z4rcal.1hgvybjffvqspgx8 \
	I1013 22:11:51.101829  213009 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:8246abc30e5ad4f73e0c5665e9f98b5f472397f3707b313971a0405cce9e4e60 
	I1013 22:11:51.105573  213009 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1013 22:11:51.105822  213009 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1013 22:11:51.105943  213009 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1013 22:11:51.105964  213009 cni.go:84] Creating CNI manager for ""
	I1013 22:11:51.105980  213009 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 22:11:51.109118  213009 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1013 22:11:51.112084  213009 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1013 22:11:51.116426  213009 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1013 22:11:51.116449  213009 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1013 22:11:51.132432  213009 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1013 22:11:51.466568  213009 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1013 22:11:51.466695  213009 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:11:51.466781  213009 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-122822 minikube.k8s.io/updated_at=2025_10_13T22_11_51_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6d66ff63385795e7745a92b3d96cb54f5b977801 minikube.k8s.io/name=auto-122822 minikube.k8s.io/primary=true
	I1013 22:11:51.480595  213009 ops.go:34] apiserver oom_adj: -16
	I1013 22:11:51.635758  213009 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:11:52.136361  213009 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:11:52.635974  213009 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:11:53.135974  213009 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:11:53.635953  213009 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:11:54.135964  213009 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:11:54.635933  213009 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:11:55.135988  213009 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:11:55.635913  213009 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1013 22:11:55.915738  213009 kubeadm.go:1113] duration metric: took 4.449086874s to wait for elevateKubeSystemPrivileges
	I1013 22:11:55.915826  213009 kubeadm.go:402] duration metric: took 20.034991444s to StartCluster
	I1013 22:11:55.915848  213009 settings.go:142] acquiring lock: {Name:mk4a4b065845724eb9b4bb1832a39a02e57dd066 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:11:55.915912  213009 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-2495/kubeconfig
	I1013 22:11:55.916925  213009 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/kubeconfig: {Name:mke211bdcf461494c5205a242d94f4d44bbce10e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 22:11:55.917169  213009 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1013 22:11:55.917373  213009 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1013 22:11:55.917665  213009 config.go:182] Loaded profile config "auto-122822": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 22:11:55.917707  213009 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1013 22:11:55.917772  213009 addons.go:69] Setting storage-provisioner=true in profile "auto-122822"
	I1013 22:11:55.917790  213009 addons.go:238] Setting addon storage-provisioner=true in "auto-122822"
	I1013 22:11:55.917811  213009 host.go:66] Checking if "auto-122822" exists ...
	I1013 22:11:55.917831  213009 addons.go:69] Setting default-storageclass=true in profile "auto-122822"
	I1013 22:11:55.917849  213009 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-122822"
	I1013 22:11:55.918150  213009 cli_runner.go:164] Run: docker container inspect auto-122822 --format={{.State.Status}}
	I1013 22:11:55.921368  213009 cli_runner.go:164] Run: docker container inspect auto-122822 --format={{.State.Status}}
	I1013 22:11:55.930319  213009 out.go:179] * Verifying Kubernetes components...
	I1013 22:11:55.934255  213009 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1013 22:11:56.001941  213009 addons.go:238] Setting addon default-storageclass=true in "auto-122822"
	I1013 22:11:56.001993  213009 host.go:66] Checking if "auto-122822" exists ...
	I1013 22:11:56.002423  213009 cli_runner.go:164] Run: docker container inspect auto-122822 --format={{.State.Status}}
	I1013 22:11:56.022463  213009 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1013 22:11:56.026795  213009 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 22:11:56.026821  213009 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1013 22:11:56.026885  213009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-122822
	I1013 22:11:56.063139  213009 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1013 22:11:56.063164  213009 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1013 22:11:56.063240  213009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-122822
	I1013 22:11:56.070978  213009 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33101 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/auto-122822/id_rsa Username:docker}
	I1013 22:11:56.098230  213009 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33101 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/auto-122822/id_rsa Username:docker}
	I1013 22:11:56.544218  213009 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1013 22:11:56.544415  213009 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1013 22:11:56.571980  213009 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1013 22:11:56.822765  213009 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1013 22:11:57.383341  213009 node_ready.go:35] waiting up to 15m0s for node "auto-122822" to be "Ready" ...
	I1013 22:11:57.383662  213009 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1013 22:11:57.581676  213009 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1013 22:11:57.584536  213009 addons.go:514] duration metric: took 1.666815171s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1013 22:11:57.888687  213009 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-122822" context rescaled to 1 replicas
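	
	The CoreDNS rewrite logged above injects a host record for host.minikube.internal before the forward directive. Assuming the stock minikube Corefile, the replaced ConfigMap ends up with a stanza roughly like the following; this is a sketch reconstructed from the sed expression in the log, not a dump of the live ConfigMap.
	
	        hosts {
	           192.168.85.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf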
	
	
	==> CRI-O <==
	Oct 13 22:11:40 default-k8s-diff-port-007533 crio[652]: time="2025-10-13T22:11:40.781184263Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=5d5d13d5-825f-4111-b705-3bb15f3d3d29 name=/runtime.v1.ImageService/ImageStatus
	Oct 13 22:11:40 default-k8s-diff-port-007533 crio[652]: time="2025-10-13T22:11:40.784342345Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=e423c93f-56f5-4516-8006-0e22abefba45 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:11:40 default-k8s-diff-port-007533 crio[652]: time="2025-10-13T22:11:40.784594515Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:11:40 default-k8s-diff-port-007533 crio[652]: time="2025-10-13T22:11:40.792166843Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:11:40 default-k8s-diff-port-007533 crio[652]: time="2025-10-13T22:11:40.792359215Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/751bf86ddcb363c410d8c08adc8f7ef3647e3ad0aacbf1d6702965c54bb39e9e/merged/etc/passwd: no such file or directory"
	Oct 13 22:11:40 default-k8s-diff-port-007533 crio[652]: time="2025-10-13T22:11:40.792384954Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/751bf86ddcb363c410d8c08adc8f7ef3647e3ad0aacbf1d6702965c54bb39e9e/merged/etc/group: no such file or directory"
	Oct 13 22:11:40 default-k8s-diff-port-007533 crio[652]: time="2025-10-13T22:11:40.792681201Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 13 22:11:40 default-k8s-diff-port-007533 crio[652]: time="2025-10-13T22:11:40.824273611Z" level=info msg="Created container 55549ec52a2ac020ce56c1f9974b4fd36115996f322fd5e802bb928b4087a999: kube-system/storage-provisioner/storage-provisioner" id=e423c93f-56f5-4516-8006-0e22abefba45 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 13 22:11:40 default-k8s-diff-port-007533 crio[652]: time="2025-10-13T22:11:40.825224782Z" level=info msg="Starting container: 55549ec52a2ac020ce56c1f9974b4fd36115996f322fd5e802bb928b4087a999" id=8ffca02b-d290-41e7-9a90-fc314f6016e0 name=/runtime.v1.RuntimeService/StartContainer
	Oct 13 22:11:40 default-k8s-diff-port-007533 crio[652]: time="2025-10-13T22:11:40.827158846Z" level=info msg="Started container" PID=1638 containerID=55549ec52a2ac020ce56c1f9974b4fd36115996f322fd5e802bb928b4087a999 description=kube-system/storage-provisioner/storage-provisioner id=8ffca02b-d290-41e7-9a90-fc314f6016e0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=91cdf3bc4422ae04fac17ffb6be3b1a9e53555f420adf5dc1605c19e8b2171a8
	Oct 13 22:11:49 default-k8s-diff-port-007533 crio[652]: time="2025-10-13T22:11:49.754421698Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 13 22:11:49 default-k8s-diff-port-007533 crio[652]: time="2025-10-13T22:11:49.757846152Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 13 22:11:49 default-k8s-diff-port-007533 crio[652]: time="2025-10-13T22:11:49.757994439Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 13 22:11:49 default-k8s-diff-port-007533 crio[652]: time="2025-10-13T22:11:49.758069588Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 13 22:11:49 default-k8s-diff-port-007533 crio[652]: time="2025-10-13T22:11:49.761995552Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 13 22:11:49 default-k8s-diff-port-007533 crio[652]: time="2025-10-13T22:11:49.762129719Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 13 22:11:49 default-k8s-diff-port-007533 crio[652]: time="2025-10-13T22:11:49.762196219Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 13 22:11:49 default-k8s-diff-port-007533 crio[652]: time="2025-10-13T22:11:49.765135107Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 13 22:11:49 default-k8s-diff-port-007533 crio[652]: time="2025-10-13T22:11:49.765256031Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 13 22:11:49 default-k8s-diff-port-007533 crio[652]: time="2025-10-13T22:11:49.765322047Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 13 22:11:49 default-k8s-diff-port-007533 crio[652]: time="2025-10-13T22:11:49.768524485Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 13 22:11:49 default-k8s-diff-port-007533 crio[652]: time="2025-10-13T22:11:49.76864184Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 13 22:11:49 default-k8s-diff-port-007533 crio[652]: time="2025-10-13T22:11:49.768714174Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 13 22:11:49 default-k8s-diff-port-007533 crio[652]: time="2025-10-13T22:11:49.772989052Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 13 22:11:49 default-k8s-diff-port-007533 crio[652]: time="2025-10-13T22:11:49.773106038Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	55549ec52a2ac       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           21 seconds ago       Running             storage-provisioner         2                   91cdf3bc4422a       storage-provisioner                                    kube-system
	abf6fe6a0c2b4       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           22 seconds ago       Exited              dashboard-metrics-scraper   2                   5251be7afd3b7       dashboard-metrics-scraper-6ffb444bf9-jbcqw             kubernetes-dashboard
	c4bacb88f25bc       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   34 seconds ago       Running             kubernetes-dashboard        0                   664fc27f5f788       kubernetes-dashboard-855c9754f9-ktrdv                  kubernetes-dashboard
	6a5df3c5a9045       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           52 seconds ago       Running             kube-proxy                  1                   e720ebc96b802       kube-proxy-5947n                                       kube-system
	2619fffe3a121       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           52 seconds ago       Running             coredns                     1                   d9b916588cee5       coredns-66bc5c9577-vftdh                               kube-system
	91481353c67cc       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           52 seconds ago       Exited              storage-provisioner         1                   91cdf3bc4422a       storage-provisioner                                    kube-system
	b7a49ab1e9406       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           52 seconds ago       Running             kindnet-cni                 1                   b477c7242cba7       kindnet-xvkwh                                          kube-system
	7f92280ab414d       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           53 seconds ago       Running             busybox                     1                   5aa54aa3a0552       busybox                                                default
	3970a5fddb4ed       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   eb326ece555a8       kube-scheduler-default-k8s-diff-port-007533            kube-system
	5bbc4021a2610       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   52475e8a17180       kube-apiserver-default-k8s-diff-port-007533            kube-system
	bd56c01842940       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   3106f2cf0eeb0       kube-controller-manager-default-k8s-diff-port-007533   kube-system
	99b9c491479a5       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   43410061e3737       etcd-default-k8s-diff-port-007533                      kube-system
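	
	The container listing above is in crictl's table format; assuming shell access to the node and crictl pointed at the CRI-O socket, a comparable listing (including exited containers) can be reproduced with:
	
	  sudo crictl ps -a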
	
	
	==> coredns [2619fffe3a121a9831056e97ad35ee96fa24908d3db94f825e51faa63ed6a795] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:41863 - 27930 "HINFO IN 5218861235792626440.6989432745175703380. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020119972s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-007533
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-007533
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6d66ff63385795e7745a92b3d96cb54f5b977801
	                    minikube.k8s.io/name=default-k8s-diff-port-007533
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_13T22_09_37_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 13 Oct 2025 22:09:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-007533
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 13 Oct 2025 22:11:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 13 Oct 2025 22:11:39 +0000   Mon, 13 Oct 2025 22:09:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 13 Oct 2025 22:11:39 +0000   Mon, 13 Oct 2025 22:09:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 13 Oct 2025 22:11:39 +0000   Mon, 13 Oct 2025 22:09:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 13 Oct 2025 22:11:39 +0000   Mon, 13 Oct 2025 22:10:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-007533
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 063d00db17b345a69c75216d67066c96
	  System UUID:                31edf4b0-bfde-45c9-96bd-f89ce401d052
	  Boot ID:                    d306204e-e74c-4697-9887-c9f3c96cc083
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 coredns-66bc5c9577-vftdh                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m21s
	  kube-system                 etcd-default-k8s-diff-port-007533                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m26s
	  kube-system                 kindnet-xvkwh                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m21s
	  kube-system                 kube-apiserver-default-k8s-diff-port-007533             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-007533    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 kube-proxy-5947n                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 kube-scheduler-default-k8s-diff-port-007533             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-jbcqw              0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-ktrdv                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m19s                  kube-proxy       
	  Normal   Starting                 51s                    kube-proxy       
	  Normal   Starting                 2m34s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m34s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m34s (x8 over 2m34s)  kubelet          Node default-k8s-diff-port-007533 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m34s (x8 over 2m34s)  kubelet          Node default-k8s-diff-port-007533 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m34s (x8 over 2m34s)  kubelet          Node default-k8s-diff-port-007533 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m26s                  kubelet          Node default-k8s-diff-port-007533 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m26s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m26s                  kubelet          Node default-k8s-diff-port-007533 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m26s                  kubelet          Node default-k8s-diff-port-007533 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m26s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m22s                  node-controller  Node default-k8s-diff-port-007533 event: Registered Node default-k8s-diff-port-007533 in Controller
	  Normal   NodeReady                100s                   kubelet          Node default-k8s-diff-port-007533 status is now: NodeReady
	  Normal   Starting                 64s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 64s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  64s (x8 over 64s)      kubelet          Node default-k8s-diff-port-007533 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    64s (x8 over 64s)      kubelet          Node default-k8s-diff-port-007533 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     64s (x8 over 64s)      kubelet          Node default-k8s-diff-port-007533 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           49s                    node-controller  Node default-k8s-diff-port-007533 event: Registered Node default-k8s-diff-port-007533 in Controller
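	
	The node description above is standard kubectl output; assuming kubeconfig access to this cluster, it can be regenerated with:
	
	  kubectl describe node default-k8s-diff-port-007533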
	
	
	==> dmesg <==
	[Oct13 21:43] overlayfs: idmapped layers are currently not supported
	[ +17.500139] overlayfs: idmapped layers are currently not supported
	[Oct13 21:44] overlayfs: idmapped layers are currently not supported
	[ +25.978359] overlayfs: idmapped layers are currently not supported
	[Oct13 21:46] overlayfs: idmapped layers are currently not supported
	[Oct13 21:47] overlayfs: idmapped layers are currently not supported
	[Oct13 21:49] overlayfs: idmapped layers are currently not supported
	[Oct13 21:50] overlayfs: idmapped layers are currently not supported
	[Oct13 21:51] overlayfs: idmapped layers are currently not supported
	[Oct13 21:53] overlayfs: idmapped layers are currently not supported
	[Oct13 21:54] overlayfs: idmapped layers are currently not supported
	[Oct13 21:55] overlayfs: idmapped layers are currently not supported
	[Oct13 22:02] overlayfs: idmapped layers are currently not supported
	[Oct13 22:04] overlayfs: idmapped layers are currently not supported
	[ +37.438407] overlayfs: idmapped layers are currently not supported
	[Oct13 22:05] overlayfs: idmapped layers are currently not supported
	[Oct13 22:06] overlayfs: idmapped layers are currently not supported
	[Oct13 22:07] overlayfs: idmapped layers are currently not supported
	[ +29.672836] overlayfs: idmapped layers are currently not supported
	[Oct13 22:08] overlayfs: idmapped layers are currently not supported
	[Oct13 22:09] overlayfs: idmapped layers are currently not supported
	[Oct13 22:10] overlayfs: idmapped layers are currently not supported
	[ +26.243538] overlayfs: idmapped layers are currently not supported
	[  +3.497977] overlayfs: idmapped layers are currently not supported
	[Oct13 22:11] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [99b9c491479a5e957e19a4c1ca9d1a62f9cde3467897c3b831fc01afd815b1f7] <==
	{"level":"warn","ts":"2025-10-13T22:11:04.396317Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:11:04.427943Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:11:04.466192Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:11:04.492944Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:11:04.552576Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:11:04.588319Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:11:04.622747Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:11:04.680782Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:11:04.711859Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:11:04.768324Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:11:04.778672Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:11:04.820251Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:11:04.853477Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:11:04.903231Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:11:04.942485Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:11:04.975740Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:11:05.011447Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:11:05.054851Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:11:05.099537Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:11:05.174270Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:11:05.211030Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:11:05.254568Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:11:05.361117Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-13T22:11:05.496771Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32882","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-13T22:11:09.795735Z","caller":"traceutil/trace.go:172","msg":"trace[2006571654] transaction","detail":"{read_only:false; response_revision:512; number_of_response:1; }","duration":"119.131131ms","start":"2025-10-13T22:11:09.676586Z","end":"2025-10-13T22:11:09.795717Z","steps":["trace[2006571654] 'process raft request'  (duration: 118.864504ms)"],"step_count":1}
	
	
	==> kernel <==
	 22:12:02 up  1:54,  0 user,  load average: 6.30, 4.16, 2.83
	Linux default-k8s-diff-port-007533 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b7a49ab1e9406cec6e4d3573a11414997615cb5773b9431d80fda6e6f6b41fa8] <==
	I1013 22:11:09.600529       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1013 22:11:09.600745       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1013 22:11:09.600855       1 main.go:148] setting mtu 1500 for CNI 
	I1013 22:11:09.600865       1 main.go:178] kindnetd IP family: "ipv4"
	I1013 22:11:09.600876       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-13T22:11:09Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1013 22:11:09.753787       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1013 22:11:09.753814       1 controller.go:381] "Waiting for informer caches to sync"
	I1013 22:11:09.753822       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1013 22:11:09.754568       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1013 22:11:39.754627       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1013 22:11:39.754769       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1013 22:11:39.754854       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1013 22:11:39.754976       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1013 22:11:40.754378       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1013 22:11:40.754424       1 metrics.go:72] Registering metrics
	I1013 22:11:40.754485       1 controller.go:711] "Syncing nftables rules"
	I1013 22:11:49.754035       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1013 22:11:49.754144       1 main.go:301] handling current node
	I1013 22:11:59.754035       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1013 22:11:59.754065       1 main.go:301] handling current node
	
	
	==> kube-apiserver [5bbc4021a2610d0a72615ef54a61b83477debc9e67e27338b6ebdad10f29a7bb] <==
	I1013 22:11:07.780056       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1013 22:11:07.780210       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1013 22:11:07.784512       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1013 22:11:07.784598       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1013 22:11:07.784735       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1013 22:11:07.785001       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1013 22:11:07.785069       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1013 22:11:07.785126       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1013 22:11:07.811964       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1013 22:11:07.823195       1 aggregator.go:171] initial CRD sync complete...
	I1013 22:11:07.835468       1 autoregister_controller.go:144] Starting autoregister controller
	I1013 22:11:07.835549       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1013 22:11:07.835582       1 cache.go:39] Caches are synced for autoregister controller
	I1013 22:11:08.021586       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	E1013 22:11:08.047755       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1013 22:11:08.396970       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1013 22:11:10.293352       1 controller.go:667] quota admission added evaluator for: namespaces
	I1013 22:11:10.629420       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1013 22:11:10.800436       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1013 22:11:10.865194       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1013 22:11:11.120843       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.52.168"}
	I1013 22:11:11.145990       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.165.238"}
	I1013 22:11:13.986242       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1013 22:11:14.099181       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1013 22:11:14.224625       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [bd56c0184294021b9044112e6391397dce68b76fd94d9c861cdd5ada9d399899] <==
	I1013 22:11:13.576120       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1013 22:11:13.577278       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1013 22:11:13.578259       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1013 22:11:13.578310       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1013 22:11:13.578340       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1013 22:11:13.578492       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1013 22:11:13.586130       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1013 22:11:13.586622       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1013 22:11:13.592361       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1013 22:11:13.592476       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1013 22:11:13.592510       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1013 22:11:13.592547       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1013 22:11:13.592628       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1013 22:11:13.616494       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1013 22:11:13.616670       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1013 22:11:13.616791       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-007533"
	I1013 22:11:13.616874       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1013 22:11:13.622465       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1013 22:11:13.626776       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1013 22:11:13.626949       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1013 22:11:13.658476       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 22:11:13.676408       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1013 22:11:13.676501       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1013 22:11:13.676533       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1013 22:11:14.142034       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kubernetes-dashboard/dashboard-metrics-scraper" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [6a5df3c5a9045027560adb2e9d88517dd47a910cecaaaaec5cf2423307ae5e71] <==
	I1013 22:11:10.756589       1 server_linux.go:53] "Using iptables proxy"
	I1013 22:11:10.944373       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1013 22:11:11.055147       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1013 22:11:11.055196       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1013 22:11:11.055320       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1013 22:11:11.205225       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1013 22:11:11.205347       1 server_linux.go:132] "Using iptables Proxier"
	I1013 22:11:11.210619       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1013 22:11:11.210995       1 server.go:527] "Version info" version="v1.34.1"
	I1013 22:11:11.211170       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 22:11:11.212575       1 config.go:200] "Starting service config controller"
	I1013 22:11:11.212623       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1013 22:11:11.212682       1 config.go:106] "Starting endpoint slice config controller"
	I1013 22:11:11.212709       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1013 22:11:11.212779       1 config.go:403] "Starting serviceCIDR config controller"
	I1013 22:11:11.212806       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1013 22:11:11.213685       1 config.go:309] "Starting node config controller"
	I1013 22:11:11.213759       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1013 22:11:11.213790       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1013 22:11:11.312988       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1013 22:11:11.313194       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1013 22:11:11.313217       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [3970a5fddb4ed9fafd03a56430ff0a855693ce410e375d5e6f5b23115bdec4fe] <==
	I1013 22:11:05.106406       1 serving.go:386] Generated self-signed cert in-memory
	I1013 22:11:10.707743       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1013 22:11:10.711456       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1013 22:11:10.764231       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1013 22:11:10.764400       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1013 22:11:10.764455       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1013 22:11:10.764514       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1013 22:11:10.766401       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1013 22:11:10.766487       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1013 22:11:10.767635       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 22:11:10.767707       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 22:11:10.865436       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1013 22:11:10.868548       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1013 22:11:10.868619       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Oct 13 22:11:08 default-k8s-diff-port-007533 kubelet[777]: W1013 22:11:08.836164     777 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/42b7859eebb1b617062a071a3162eafb492415d4bedb857080058ccb1421015f/crio-5aa54aa3a055237b702213076d21cde3afa089f28f271e354cf5779833481418 WatchSource:0}: Error finding container 5aa54aa3a055237b702213076d21cde3afa089f28f271e354cf5779833481418: Status 404 returned error can't find the container with id 5aa54aa3a055237b702213076d21cde3afa089f28f271e354cf5779833481418
	Oct 13 22:11:14 default-k8s-diff-port-007533 kubelet[777]: I1013 22:11:14.215016     777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/ba9e1654-c75c-4cdc-bd62-40572b9c029b-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-ktrdv\" (UID: \"ba9e1654-c75c-4cdc-bd62-40572b9c029b\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-ktrdv"
	Oct 13 22:11:14 default-k8s-diff-port-007533 kubelet[777]: I1013 22:11:14.215501     777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5wbt\" (UniqueName: \"kubernetes.io/projected/ba9e1654-c75c-4cdc-bd62-40572b9c029b-kube-api-access-m5wbt\") pod \"kubernetes-dashboard-855c9754f9-ktrdv\" (UID: \"ba9e1654-c75c-4cdc-bd62-40572b9c029b\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-ktrdv"
	Oct 13 22:11:14 default-k8s-diff-port-007533 kubelet[777]: I1013 22:11:14.215614     777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/fc209cd9-0417-4bfd-a13f-de31873f9492-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-jbcqw\" (UID: \"fc209cd9-0417-4bfd-a13f-de31873f9492\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jbcqw"
	Oct 13 22:11:14 default-k8s-diff-port-007533 kubelet[777]: I1013 22:11:14.215724     777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-922m4\" (UniqueName: \"kubernetes.io/projected/fc209cd9-0417-4bfd-a13f-de31873f9492-kube-api-access-922m4\") pod \"dashboard-metrics-scraper-6ffb444bf9-jbcqw\" (UID: \"fc209cd9-0417-4bfd-a13f-de31873f9492\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jbcqw"
	Oct 13 22:11:14 default-k8s-diff-port-007533 kubelet[777]: W1013 22:11:14.477907     777 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/42b7859eebb1b617062a071a3162eafb492415d4bedb857080058ccb1421015f/crio-664fc27f5f788da000b98141f8ab86ed4b55a7fc01be524dd66678952c64336e WatchSource:0}: Error finding container 664fc27f5f788da000b98141f8ab86ed4b55a7fc01be524dd66678952c64336e: Status 404 returned error can't find the container with id 664fc27f5f788da000b98141f8ab86ed4b55a7fc01be524dd66678952c64336e
	Oct 13 22:11:21 default-k8s-diff-port-007533 kubelet[777]: I1013 22:11:21.701119     777 scope.go:117] "RemoveContainer" containerID="8e49a86e95a1b0f6987fd075d284be258d3f2e536aa8a903968d03bc97e33600"
	Oct 13 22:11:22 default-k8s-diff-port-007533 kubelet[777]: I1013 22:11:22.721002     777 scope.go:117] "RemoveContainer" containerID="8e49a86e95a1b0f6987fd075d284be258d3f2e536aa8a903968d03bc97e33600"
	Oct 13 22:11:22 default-k8s-diff-port-007533 kubelet[777]: I1013 22:11:22.723979     777 scope.go:117] "RemoveContainer" containerID="16076c96ca4175f14eaeb700c0d0cba0fb3975d7ff1b6aa9aea477d14ff7ffe7"
	Oct 13 22:11:22 default-k8s-diff-port-007533 kubelet[777]: E1013 22:11:22.732372     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jbcqw_kubernetes-dashboard(fc209cd9-0417-4bfd-a13f-de31873f9492)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jbcqw" podUID="fc209cd9-0417-4bfd-a13f-de31873f9492"
	Oct 13 22:11:23 default-k8s-diff-port-007533 kubelet[777]: I1013 22:11:23.725733     777 scope.go:117] "RemoveContainer" containerID="16076c96ca4175f14eaeb700c0d0cba0fb3975d7ff1b6aa9aea477d14ff7ffe7"
	Oct 13 22:11:23 default-k8s-diff-port-007533 kubelet[777]: E1013 22:11:23.725913     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jbcqw_kubernetes-dashboard(fc209cd9-0417-4bfd-a13f-de31873f9492)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jbcqw" podUID="fc209cd9-0417-4bfd-a13f-de31873f9492"
	Oct 13 22:11:24 default-k8s-diff-port-007533 kubelet[777]: I1013 22:11:24.731805     777 scope.go:117] "RemoveContainer" containerID="16076c96ca4175f14eaeb700c0d0cba0fb3975d7ff1b6aa9aea477d14ff7ffe7"
	Oct 13 22:11:24 default-k8s-diff-port-007533 kubelet[777]: E1013 22:11:24.732002     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jbcqw_kubernetes-dashboard(fc209cd9-0417-4bfd-a13f-de31873f9492)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jbcqw" podUID="fc209cd9-0417-4bfd-a13f-de31873f9492"
	Oct 13 22:11:39 default-k8s-diff-port-007533 kubelet[777]: I1013 22:11:39.328048     777 scope.go:117] "RemoveContainer" containerID="16076c96ca4175f14eaeb700c0d0cba0fb3975d7ff1b6aa9aea477d14ff7ffe7"
	Oct 13 22:11:39 default-k8s-diff-port-007533 kubelet[777]: I1013 22:11:39.774521     777 scope.go:117] "RemoveContainer" containerID="16076c96ca4175f14eaeb700c0d0cba0fb3975d7ff1b6aa9aea477d14ff7ffe7"
	Oct 13 22:11:39 default-k8s-diff-port-007533 kubelet[777]: I1013 22:11:39.774900     777 scope.go:117] "RemoveContainer" containerID="abf6fe6a0c2b477370ce1b10a59d5afef966b3d2278b2343bb8f29356a375406"
	Oct 13 22:11:39 default-k8s-diff-port-007533 kubelet[777]: E1013 22:11:39.777987     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jbcqw_kubernetes-dashboard(fc209cd9-0417-4bfd-a13f-de31873f9492)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jbcqw" podUID="fc209cd9-0417-4bfd-a13f-de31873f9492"
	Oct 13 22:11:39 default-k8s-diff-port-007533 kubelet[777]: I1013 22:11:39.815987     777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-ktrdv" podStartSLOduration=12.570722904 podStartE2EDuration="25.815969423s" podCreationTimestamp="2025-10-13 22:11:14 +0000 UTC" firstStartedPulling="2025-10-13 22:11:14.482496197 +0000 UTC m=+16.430526681" lastFinishedPulling="2025-10-13 22:11:27.727742707 +0000 UTC m=+29.675773200" observedRunningTime="2025-10-13 22:11:28.758169225 +0000 UTC m=+30.706199726" watchObservedRunningTime="2025-10-13 22:11:39.815969423 +0000 UTC m=+41.763999908"
	Oct 13 22:11:40 default-k8s-diff-port-007533 kubelet[777]: I1013 22:11:40.779053     777 scope.go:117] "RemoveContainer" containerID="91481353c67cc68bb4298db655c0ba872a70547e325a61bce015a48724300e10"
	Oct 13 22:11:44 default-k8s-diff-port-007533 kubelet[777]: I1013 22:11:44.412072     777 scope.go:117] "RemoveContainer" containerID="abf6fe6a0c2b477370ce1b10a59d5afef966b3d2278b2343bb8f29356a375406"
	Oct 13 22:11:44 default-k8s-diff-port-007533 kubelet[777]: E1013 22:11:44.412916     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jbcqw_kubernetes-dashboard(fc209cd9-0417-4bfd-a13f-de31873f9492)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jbcqw" podUID="fc209cd9-0417-4bfd-a13f-de31873f9492"
	Oct 13 22:11:56 default-k8s-diff-port-007533 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 13 22:11:57 default-k8s-diff-port-007533 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 13 22:11:57 default-k8s-diff-port-007533 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [c4bacb88f25bc5a14376dfc758244b8c2ccbf962e0fd287744b5751bf14025f0] <==
	2025/10/13 22:11:27 Starting overwatch
	2025/10/13 22:11:27 Using namespace: kubernetes-dashboard
	2025/10/13 22:11:27 Using in-cluster config to connect to apiserver
	2025/10/13 22:11:27 Using secret token for csrf signing
	2025/10/13 22:11:27 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/13 22:11:27 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/13 22:11:27 Successful initial request to the apiserver, version: v1.34.1
	2025/10/13 22:11:27 Generating JWE encryption key
	2025/10/13 22:11:27 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/13 22:11:27 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/13 22:11:28 Initializing JWE encryption key from synchronized object
	2025/10/13 22:11:28 Creating in-cluster Sidecar client
	2025/10/13 22:11:28 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/13 22:11:28 Serving insecurely on HTTP port: 9090
	2025/10/13 22:11:58 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [55549ec52a2ac020ce56c1f9974b4fd36115996f322fd5e802bb928b4087a999] <==
	I1013 22:11:40.850021       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1013 22:11:40.863940       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1013 22:11:40.864180       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1013 22:11:40.867390       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:11:44.322701       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:11:48.582553       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:11:52.180965       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:11:55.235194       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:11:58.257332       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:11:58.262627       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1013 22:11:58.262974       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1013 22:11:58.263189       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-007533_e4cbd55e-d839-415f-951c-75c380b66e03!
	I1013 22:11:58.270736       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a74fbebd-1296-493d-a460-f6003ff9a0e7", APIVersion:"v1", ResourceVersion:"676", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-007533_e4cbd55e-d839-415f-951c-75c380b66e03 became leader
	W1013 22:11:58.274387       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:11:58.283520       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1013 22:11:58.366273       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-007533_e4cbd55e-d839-415f-951c-75c380b66e03!
	W1013 22:12:00.289755       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:12:00.300065       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:12:02.304167       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1013 22:12:02.313810       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [91481353c67cc68bb4298db655c0ba872a70547e325a61bce015a48724300e10] <==
	I1013 22:11:10.204889       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1013 22:11:40.224321       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-007533 -n default-k8s-diff-port-007533
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-007533 -n default-k8s-diff-port-007533: exit status 2 (376.240073ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-007533 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (7.08s)
E1013 22:17:40.296315    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/auto-122822/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:17:40.302732    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/auto-122822/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:17:40.314187    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/auto-122822/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:17:40.335644    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/auto-122822/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:17:40.377023    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/auto-122822/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:17:40.458427    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/auto-122822/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:17:40.620032    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/auto-122822/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:17:40.941731    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/auto-122822/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:17:41.583929    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/auto-122822/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:17:42.866102    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/auto-122822/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:17:45.427483    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/auto-122822/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:17:50.549487    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/auto-122822/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:17:52.932229    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/no-preload-998398/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:18:00.791908    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/auto-122822/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:18:09.640018    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/default-k8s-diff-port-007533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
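
For reference, a single failing subtest such as the Pause case above can usually be re-run in isolation with Go's test runner. A minimal sketch, assuming a checkout of the minikube source tree (the integration suite lives under test/integration) and omitting any harness-specific flags such as the driver or start arguments, which this report does not record:

	go test ./test/integration -run 'TestStartStop/group/default-k8s-diff-port/serial/Pause' -v -timeout 30m

The -run pattern is matched level by level against the slash-separated subtest path, so only this one case executes.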

                                                
                                    

Test pass (258/327)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 43.73
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.1
9 TestDownloadOnly/v1.28.0/DeleteAll 0.21
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 10.11
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.08
18 TestDownloadOnly/v1.34.1/DeleteAll 0.21
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.6
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 178.19
31 TestAddons/serial/GCPAuth/Namespaces 0.23
32 TestAddons/serial/GCPAuth/FakeCredentials 10.14
48 TestAddons/StoppedEnableDisable 12.2
49 TestCertOptions 36.49
50 TestCertExpiration 234.96
59 TestErrorSpam/setup 27.86
60 TestErrorSpam/start 0.76
61 TestErrorSpam/status 1.08
62 TestErrorSpam/pause 7.03
63 TestErrorSpam/unpause 5.58
64 TestErrorSpam/stop 1.42
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 74.81
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 30.45
71 TestFunctional/serial/KubeContext 0.05
72 TestFunctional/serial/KubectlGetPods 0.11
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.55
76 TestFunctional/serial/CacheCmd/cache/add_local 1.12
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
78 TestFunctional/serial/CacheCmd/cache/list 0.05
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.81
81 TestFunctional/serial/CacheCmd/cache/delete 0.13
82 TestFunctional/serial/MinikubeKubectlCmd 0.13
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
84 TestFunctional/serial/ExtraConfig 36.98
85 TestFunctional/serial/ComponentHealth 0.1
86 TestFunctional/serial/LogsCmd 1.52
87 TestFunctional/serial/LogsFileCmd 1.45
88 TestFunctional/serial/InvalidService 4.46
90 TestFunctional/parallel/ConfigCmd 0.49
91 TestFunctional/parallel/DashboardCmd 10.98
92 TestFunctional/parallel/DryRun 0.61
93 TestFunctional/parallel/InternationalLanguage 0.27
94 TestFunctional/parallel/StatusCmd 1.01
99 TestFunctional/parallel/AddonsCmd 0.43
100 TestFunctional/parallel/PersistentVolumeClaim 25.08
102 TestFunctional/parallel/SSHCmd 0.7
103 TestFunctional/parallel/CpCmd 2.25
105 TestFunctional/parallel/FileSync 0.39
106 TestFunctional/parallel/CertSync 2.21
110 TestFunctional/parallel/NodeLabels 0.12
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.68
114 TestFunctional/parallel/License 0.34
116 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.74
117 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.44
120 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.13
121 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
125 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
127 TestFunctional/parallel/ProfileCmd/profile_not_create 0.46
128 TestFunctional/parallel/ProfileCmd/profile_list 0.41
129 TestFunctional/parallel/ProfileCmd/profile_json_output 0.42
130 TestFunctional/parallel/MountCmd/any-port 7.05
131 TestFunctional/parallel/MountCmd/specific-port 1.74
132 TestFunctional/parallel/MountCmd/VerifyCleanup 1.88
133 TestFunctional/parallel/ServiceCmd/List 0.63
134 TestFunctional/parallel/ServiceCmd/JSONOutput 0.61
138 TestFunctional/parallel/Version/short 0.08
139 TestFunctional/parallel/Version/components 1.31
140 TestFunctional/parallel/ImageCommands/ImageListShort 0.25
141 TestFunctional/parallel/ImageCommands/ImageListTable 0.27
142 TestFunctional/parallel/ImageCommands/ImageListJson 0.25
143 TestFunctional/parallel/ImageCommands/ImageListYaml 0.25
144 TestFunctional/parallel/ImageCommands/ImageBuild 3.93
145 TestFunctional/parallel/ImageCommands/Setup 0.65
149 TestFunctional/parallel/UpdateContextCmd/no_changes 0.19
150 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.16
151 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.16
153 TestFunctional/parallel/ImageCommands/ImageRemove 0.69
156 TestFunctional/delete_echo-server_images 0.04
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.02
163 TestMultiControlPlane/serial/StartCluster 203.13
164 TestMultiControlPlane/serial/DeployApp 6.29
165 TestMultiControlPlane/serial/PingHostFromPods 1.42
166 TestMultiControlPlane/serial/AddWorkerNode 56.82
167 TestMultiControlPlane/serial/NodeLabels 0.1
168 TestMultiControlPlane/serial/HAppyAfterClusterStart 1
169 TestMultiControlPlane/serial/CopyFile 19.27
170 TestMultiControlPlane/serial/StopSecondaryNode 12.67
171 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.87
172 TestMultiControlPlane/serial/RestartSecondaryNode 35.11
173 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.2
174 TestMultiControlPlane/serial/RestartClusterKeepsNodes 127.14
175 TestMultiControlPlane/serial/DeleteSecondaryNode 11.65
176 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.79
177 TestMultiControlPlane/serial/StopCluster 35.81
178 TestMultiControlPlane/serial/RestartCluster 75.15
179 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.79
180 TestMultiControlPlane/serial/AddSecondaryNode 84.14
181 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.01
185 TestJSONOutput/start/Command 78.67
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.69
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.25
210 TestKicCustomNetwork/create_custom_network 47.22
211 TestKicCustomNetwork/use_default_bridge_network 36.53
212 TestKicExistingNetwork 38.22
213 TestKicCustomSubnet 35.49
214 TestKicStaticIP 39.89
215 TestMainNoArgs 0.06
216 TestMinikubeProfile 75.32
219 TestMountStart/serial/StartWithMountFirst 9.44
220 TestMountStart/serial/VerifyMountFirst 0.27
221 TestMountStart/serial/StartWithMountSecond 9.45
222 TestMountStart/serial/VerifyMountSecond 0.26
223 TestMountStart/serial/DeleteFirst 1.61
224 TestMountStart/serial/VerifyMountPostDelete 0.26
225 TestMountStart/serial/Stop 1.21
226 TestMountStart/serial/RestartStopped 7.79
227 TestMountStart/serial/VerifyMountPostStop 0.26
230 TestMultiNode/serial/FreshStart2Nodes 139.13
231 TestMultiNode/serial/DeployApp2Nodes 5.26
232 TestMultiNode/serial/PingHostFrom2Pods 0.91
233 TestMultiNode/serial/AddNode 55.26
234 TestMultiNode/serial/MultiNodeLabels 0.09
235 TestMultiNode/serial/ProfileList 0.7
236 TestMultiNode/serial/CopyFile 10.19
237 TestMultiNode/serial/StopNode 2.28
238 TestMultiNode/serial/StartAfterStop 8.09
239 TestMultiNode/serial/RestartKeepsNodes 78.33
240 TestMultiNode/serial/DeleteNode 5.63
241 TestMultiNode/serial/StopMultiNode 23.72
242 TestMultiNode/serial/RestartMultiNode 47.09
243 TestMultiNode/serial/ValidateNameConflict 37.05
248 TestPreload 132.43
250 TestScheduledStopUnix 110.38
253 TestInsufficientStorage 11.65
254 TestRunningBinaryUpgrade 60.14
256 TestKubernetesUpgrade 211.33
257 TestMissingContainerUpgrade 122.56
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
260 TestNoKubernetes/serial/StartWithK8s 43.08
261 TestNoKubernetes/serial/StartWithStopK8s 8.32
262 TestNoKubernetes/serial/Start 9.54
263 TestNoKubernetes/serial/VerifyK8sNotRunning 0.37
264 TestNoKubernetes/serial/ProfileList 1.18
265 TestNoKubernetes/serial/Stop 1.24
266 TestNoKubernetes/serial/StartNoArgs 7.59
267 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.29
268 TestStoppedBinaryUpgrade/Setup 1.75
269 TestStoppedBinaryUpgrade/Upgrade 60.14
270 TestStoppedBinaryUpgrade/MinikubeLogs 1.18
279 TestPause/serial/Start 83.21
280 TestPause/serial/SecondStartNoReconfiguration 26.92
289 TestNetworkPlugins/group/false 3.57
294 TestStartStop/group/old-k8s-version/serial/FirstStart 62.58
295 TestStartStop/group/old-k8s-version/serial/DeployApp 9.41
297 TestStartStop/group/old-k8s-version/serial/Stop 11.84
298 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
299 TestStartStop/group/old-k8s-version/serial/SecondStart 52.94
301 TestStartStop/group/no-preload/serial/FirstStart 75.26
302 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
303 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.16
304 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.3
307 TestStartStop/group/embed-certs/serial/FirstStart 83.16
308 TestStartStop/group/no-preload/serial/DeployApp 9.4
310 TestStartStop/group/no-preload/serial/Stop 11.94
311 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
312 TestStartStop/group/no-preload/serial/SecondStart 50.24
313 TestStartStop/group/embed-certs/serial/DeployApp 9.34
315 TestStartStop/group/embed-certs/serial/Stop 11.93
316 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
317 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
318 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
319 TestStartStop/group/embed-certs/serial/SecondStart 51.98
320 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
323 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 82.12
324 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
325 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
326 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.27
329 TestStartStop/group/newest-cni/serial/FirstStart 42.03
330 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.43
332 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.96
333 TestStartStop/group/newest-cni/serial/DeployApp 0
335 TestStartStop/group/newest-cni/serial/Stop 1.21
336 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
337 TestStartStop/group/newest-cni/serial/SecondStart 22.1
338 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.18
339 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 54.54
340 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
341 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.36
344 TestNetworkPlugins/group/auto/Start 81.04
345 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
346 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.13
347 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.41
349 TestNetworkPlugins/group/kindnet/Start 80.91
350 TestNetworkPlugins/group/auto/KubeletFlags 0.35
351 TestNetworkPlugins/group/auto/NetCatPod 12.36
352 TestNetworkPlugins/group/auto/DNS 0.15
353 TestNetworkPlugins/group/auto/Localhost 0.13
354 TestNetworkPlugins/group/auto/HairPin 0.13
355 TestNetworkPlugins/group/calico/Start 63.1
356 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
357 TestNetworkPlugins/group/kindnet/KubeletFlags 0.34
358 TestNetworkPlugins/group/kindnet/NetCatPod 12.31
359 TestNetworkPlugins/group/kindnet/DNS 0.21
360 TestNetworkPlugins/group/kindnet/Localhost 0.18
361 TestNetworkPlugins/group/kindnet/HairPin 0.18
362 TestNetworkPlugins/group/custom-flannel/Start 59.66
363 TestNetworkPlugins/group/calico/ControllerPod 6.01
364 TestNetworkPlugins/group/calico/KubeletFlags 0.38
365 TestNetworkPlugins/group/calico/NetCatPod 11.36
366 TestNetworkPlugins/group/calico/DNS 0.24
367 TestNetworkPlugins/group/calico/Localhost 0.2
368 TestNetworkPlugins/group/calico/HairPin 0.22
369 TestNetworkPlugins/group/enable-default-cni/Start 78.02
370 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.38
371 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.38
372 TestNetworkPlugins/group/custom-flannel/DNS 0.19
373 TestNetworkPlugins/group/custom-flannel/Localhost 0.19
374 TestNetworkPlugins/group/custom-flannel/HairPin 0.17
375 TestNetworkPlugins/group/flannel/Start 63.15
376 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.44
377 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.38
378 TestNetworkPlugins/group/enable-default-cni/DNS 0.2
379 TestNetworkPlugins/group/enable-default-cni/Localhost 0.22
380 TestNetworkPlugins/group/enable-default-cni/HairPin 0.18
381 TestNetworkPlugins/group/bridge/Start 83.65
382 TestNetworkPlugins/group/flannel/ControllerPod 6
383 TestNetworkPlugins/group/flannel/KubeletFlags 0.28
384 TestNetworkPlugins/group/flannel/NetCatPod 11.36
385 TestNetworkPlugins/group/flannel/DNS 0.21
386 TestNetworkPlugins/group/flannel/Localhost 0.16
387 TestNetworkPlugins/group/flannel/HairPin 0.15
388 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
389 TestNetworkPlugins/group/bridge/NetCatPod 10.25
390 TestNetworkPlugins/group/bridge/DNS 0.15
391 TestNetworkPlugins/group/bridge/Localhost 0.13
392 TestNetworkPlugins/group/bridge/HairPin 0.13
x
+
TestDownloadOnly/v1.28.0/json-events (43.73s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-422444 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-422444 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (43.732924903s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (43.73s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1013 20:58:38.937769    4299 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1013 20:58:38.937844    4299 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-2495/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-422444
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-422444: exit status 85 (94.558181ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-422444 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-422444 │ jenkins │ v1.37.0 │ 13 Oct 25 20:57 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 20:57:55
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 20:57:55.251575    4304 out.go:360] Setting OutFile to fd 1 ...
	I1013 20:57:55.251772    4304 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 20:57:55.251830    4304 out.go:374] Setting ErrFile to fd 2...
	I1013 20:57:55.251850    4304 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 20:57:55.252108    4304 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-2495/.minikube/bin
	W1013 20:57:55.252271    4304 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21724-2495/.minikube/config/config.json: open /home/jenkins/minikube-integration/21724-2495/.minikube/config/config.json: no such file or directory
	I1013 20:57:55.252701    4304 out.go:368] Setting JSON to true
	I1013 20:57:55.253472    4304 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":2410,"bootTime":1760386666,"procs":151,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1013 20:57:55.253568    4304 start.go:141] virtualization:  
	I1013 20:57:55.257625    4304 out.go:99] [download-only-422444] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1013 20:57:55.257788    4304 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21724-2495/.minikube/cache/preloaded-tarball: no such file or directory
	I1013 20:57:55.257855    4304 notify.go:220] Checking for updates...
	I1013 20:57:55.260708    4304 out.go:171] MINIKUBE_LOCATION=21724
	I1013 20:57:55.263770    4304 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 20:57:55.266804    4304 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21724-2495/kubeconfig
	I1013 20:57:55.269624    4304 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-2495/.minikube
	I1013 20:57:55.272623    4304 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1013 20:57:55.278273    4304 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1013 20:57:55.278529    4304 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 20:57:55.317278    4304 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1013 20:57:55.317429    4304 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 20:57:55.729384    4304 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:61 SystemTime:2025-10-13 20:57:55.720319047 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 20:57:55.729488    4304 docker.go:318] overlay module found
	I1013 20:57:55.732560    4304 out.go:99] Using the docker driver based on user configuration
	I1013 20:57:55.732597    4304 start.go:305] selected driver: docker
	I1013 20:57:55.732610    4304 start.go:925] validating driver "docker" against <nil>
	I1013 20:57:55.732726    4304 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 20:57:55.788102    4304 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:61 SystemTime:2025-10-13 20:57:55.779849858 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 20:57:55.788252    4304 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1013 20:57:55.788549    4304 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1013 20:57:55.788720    4304 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1013 20:57:55.791733    4304 out.go:171] Using Docker driver with root privileges
	I1013 20:57:55.794717    4304 cni.go:84] Creating CNI manager for ""
	I1013 20:57:55.794798    4304 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 20:57:55.794811    4304 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1013 20:57:55.794891    4304 start.go:349] cluster config:
	{Name:download-only-422444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-422444 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 20:57:55.797704    4304 out.go:99] Starting "download-only-422444" primary control-plane node in "download-only-422444" cluster
	I1013 20:57:55.797723    4304 cache.go:123] Beginning downloading kic base image for docker with crio
	I1013 20:57:55.800446    4304 out.go:99] Pulling base image v0.0.48-1759745255-21703 ...
	I1013 20:57:55.800483    4304 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1013 20:57:55.800509    4304 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1013 20:57:55.816589    4304 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 to local cache
	I1013 20:57:55.816758    4304 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local cache directory
	I1013 20:57:55.816862    4304 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 to local cache
	I1013 20:57:55.854386    4304 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1013 20:57:55.854421    4304 cache.go:58] Caching tarball of preloaded images
	I1013 20:57:55.854576    4304 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1013 20:57:55.857834    4304 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1013 20:57:55.857862    4304 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1013 20:57:55.944312    4304 preload.go:290] Got checksum from GCS API "e092595ade89dbfc477bd4cd6b9c633b"
	I1013 20:57:55.944476    4304 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/21724-2495/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1013 20:58:01.050495    4304 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 as a tarball
	
	
	* The control-plane node download-only-422444 host does not exist
	  To start a cluster, run: "minikube start -p download-only-422444"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.10s)
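
The log above shows the v1.28.0 preload tarball being fetched with an md5 checksum attached to the download URL. If the cached copy ever needs to be checked by hand, a minimal sketch using the checksum and cache path reported above:

	echo "e092595ade89dbfc477bd4cd6b9c633b  /home/jenkins/minikube-integration/21724-2495/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4" | md5sum -c -

md5sum -c reads the expected digest and file name from stdin and prints OK or FAILED for the cached tarball.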

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-422444
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/json-events (10.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-923308 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-923308 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (10.110533349s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (10.11s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1013 20:58:49.490827    4299 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1013 20:58:49.490871    4299 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-2495/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-923308
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-923308: exit status 85 (78.038572ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-422444 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-422444 │ jenkins │ v1.37.0 │ 13 Oct 25 20:57 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 13 Oct 25 20:58 UTC │ 13 Oct 25 20:58 UTC │
	│ delete  │ -p download-only-422444                                                                                                                                                   │ download-only-422444 │ jenkins │ v1.37.0 │ 13 Oct 25 20:58 UTC │ 13 Oct 25 20:58 UTC │
	│ start   │ -o=json --download-only -p download-only-923308 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-923308 │ jenkins │ v1.37.0 │ 13 Oct 25 20:58 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/13 20:58:39
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1013 20:58:39.423840    4505 out.go:360] Setting OutFile to fd 1 ...
	I1013 20:58:39.424035    4505 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 20:58:39.424061    4505 out.go:374] Setting ErrFile to fd 2...
	I1013 20:58:39.424081    4505 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 20:58:39.424369    4505 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-2495/.minikube/bin
	I1013 20:58:39.424801    4505 out.go:368] Setting JSON to true
	I1013 20:58:39.425554    4505 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":2454,"bootTime":1760386666,"procs":143,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1013 20:58:39.425639    4505 start.go:141] virtualization:  
	I1013 20:58:39.428994    4505 out.go:99] [download-only-923308] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1013 20:58:39.429236    4505 notify.go:220] Checking for updates...
	I1013 20:58:39.432043    4505 out.go:171] MINIKUBE_LOCATION=21724
	I1013 20:58:39.434900    4505 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 20:58:39.437690    4505 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21724-2495/kubeconfig
	I1013 20:58:39.440467    4505 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-2495/.minikube
	I1013 20:58:39.443361    4505 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1013 20:58:39.449096    4505 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1013 20:58:39.449373    4505 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 20:58:39.474750    4505 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1013 20:58:39.474901    4505 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 20:58:39.528183    4505 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-10-13 20:58:39.519588732 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 20:58:39.528293    4505 docker.go:318] overlay module found
	I1013 20:58:39.531272    4505 out.go:99] Using the docker driver based on user configuration
	I1013 20:58:39.531310    4505 start.go:305] selected driver: docker
	I1013 20:58:39.531322    4505 start.go:925] validating driver "docker" against <nil>
	I1013 20:58:39.531435    4505 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 20:58:39.587364    4505 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-10-13 20:58:39.57889246 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 20:58:39.587532    4505 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1013 20:58:39.587835    4505 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1013 20:58:39.588012    4505 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1013 20:58:39.591089    4505 out.go:171] Using Docker driver with root privileges
	I1013 20:58:39.593942    4505 cni.go:84] Creating CNI manager for ""
	I1013 20:58:39.594007    4505 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1013 20:58:39.594019    4505 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1013 20:58:39.594097    4505 start.go:349] cluster config:
	{Name:download-only-923308 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-923308 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 20:58:39.597033    4505 out.go:99] Starting "download-only-923308" primary control-plane node in "download-only-923308" cluster
	I1013 20:58:39.597066    4505 cache.go:123] Beginning downloading kic base image for docker with crio
	I1013 20:58:39.599950    4505 out.go:99] Pulling base image v0.0.48-1759745255-21703 ...
	I1013 20:58:39.599989    4505 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 20:58:39.600086    4505 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1013 20:58:39.619315    4505 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 to local cache
	I1013 20:58:39.619447    4505 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local cache directory
	I1013 20:58:39.619465    4505 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local cache directory, skipping pull
	I1013 20:58:39.619470    4505 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in cache, skipping pull
	I1013 20:58:39.619477    4505 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 as a tarball
	I1013 20:58:39.654736    4505 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1013 20:58:39.654765    4505 cache.go:58] Caching tarball of preloaded images
	I1013 20:58:39.654935    4505 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 20:58:39.657949    4505 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1013 20:58:39.657979    4505 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1013 20:58:39.747446    4505 preload.go:290] Got checksum from GCS API "bc3e4aa50814345ef9ba3452bb5efb9f"
	I1013 20:58:39.747502    4505 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4?checksum=md5:bc3e4aa50814345ef9ba3452bb5efb9f -> /home/jenkins/minikube-integration/21724-2495/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1013 20:58:48.753990    4505 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1013 20:58:48.754366    4505 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/download-only-923308/config.json ...
	I1013 20:58:48.754400    4505 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/download-only-923308/config.json: {Name:mk843f9648cc47446295655f1913873910e75aae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1013 20:58:48.754581    4505 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1013 20:58:48.754822    4505 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/21724-2495/.minikube/cache/linux/arm64/v1.34.1/kubectl
	
	
	* The control-plane node download-only-923308 host does not exist
	  To start a cluster, run: "minikube start -p download-only-923308"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.08s)
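The LogsDuration output above shows how the preload tarball is fetched: the checksum returned by the GCS API is appended to the download URL as "?checksum=md5:bc3e4aa50814345ef9ba3452bb5efb9f" and verified after the transfer. As a rough illustration of that download-then-verify pattern (a minimal Go sketch, not minikube's actual downloader; the destination path is a placeholder), it could look like this:

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

// downloadWithMD5 streams url into dest while hashing the bytes,
// then compares the digest against wantMD5 (hex-encoded).
func downloadWithMD5(url, dest, wantMD5 string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	out, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer out.Close()

	h := md5.New()
	// Tee the response through the hasher while writing it to disk.
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return err
	}

	if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
	}
	return nil
}

func main() {
	// URL and checksum taken from the log above; the /tmp path is illustrative.
	url := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4"
	if err := downloadWithMD5(url, "/tmp/preload.tar.lz4", "bc3e4aa50814345ef9ba3452bb5efb9f"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("preload downloaded and verified")
}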

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.21s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-923308
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestBinaryMirror (0.6s)

                                                
                                                
=== RUN   TestBinaryMirror
I1013 20:58:50.627745    4299 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-313294 --alsologtostderr --binary-mirror http://127.0.0.1:46681 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-313294" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-313294
--- PASS: TestBinaryMirror (0.60s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-421494
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-421494: exit status 85 (72.734619ms)

                                                
                                                
-- stdout --
	* Profile "addons-421494" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-421494"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-421494
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-421494: exit status 85 (66.930594ms)

                                                
                                                
-- stdout --
	* Profile "addons-421494" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-421494"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/Setup (178.19s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-421494 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-421494 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m58.190781385s)
--- PASS: TestAddons/Setup (178.19s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.23s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-421494 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-421494 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.23s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (10.14s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-421494 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-421494 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [58c554e2-c4a8-4349-82de-ed03d9667aad] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [58c554e2-c4a8-4349-82de-ed03d9667aad] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.004147632s
addons_test.go:694: (dbg) Run:  kubectl --context addons-421494 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-421494 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-421494 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-421494 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.14s)
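The FakeCredentials checks above probe what the gcp-auth addon injects into a pod: a GOOGLE_APPLICATION_CREDENTIALS environment variable, a mounted /google-app-creds.json file, and a GOOGLE_CLOUD_PROJECT value. A hypothetical program running inside such a pod could consume them as sketched below; the struct lists only the usual service-account JSON keys, and the snippet is illustrative rather than part of the addon or the test.

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// serviceAccountKey keeps just the fields we want to display from a
// Google service-account JSON file; any other keys are ignored.
type serviceAccountKey struct {
	Type        string `json:"type"`
	ProjectID   string `json:"project_id"`
	ClientEmail string `json:"client_email"`
}

func main() {
	path := os.Getenv("GOOGLE_APPLICATION_CREDENTIALS")
	if path == "" {
		fmt.Fprintln(os.Stderr, "GOOGLE_APPLICATION_CREDENTIALS is not set")
		os.Exit(1)
	}

	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, "reading credentials:", err)
		os.Exit(1)
	}

	var key serviceAccountKey
	if err := json.Unmarshal(data, &key); err != nil {
		fmt.Fprintln(os.Stderr, "parsing credentials:", err)
		os.Exit(1)
	}

	fmt.Printf("type=%s project=%s (env project: %s) account=%s\n",
		key.Type, key.ProjectID, os.Getenv("GOOGLE_CLOUD_PROJECT"), key.ClientEmail)
}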

                                                
                                    
TestAddons/StoppedEnableDisable (12.2s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-421494
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-421494: (11.928893519s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-421494
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-421494
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-421494
--- PASS: TestAddons/StoppedEnableDisable (12.20s)

                                                
                                    
TestCertOptions (36.49s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-194931 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
E1013 22:03:51.249514    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/functional-192425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-194931 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (33.802094646s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-194931 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-194931 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-194931 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-194931" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-194931
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-194931: (1.961287438s)
--- PASS: TestCertOptions (36.49s)
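The openssl invocation in TestCertOptions checks that the extra --apiserver-ips and --apiserver-names values ended up as subject alternative names in apiserver.crt and that the requested port took effect. Much the same inspection can be done with Go's crypto/x509, as in the sketch below; the certificate path is the one from the test, and the program is only an illustration, not the test's implementation.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Path used by the test inside the node; point this at a local copy.
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	block, _ := pem.Decode(pemBytes)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}

	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	// The SANs are where 192.168.15.15, localhost and www.google.com should show up.
	fmt.Println("DNS names:   ", cert.DNSNames)
	fmt.Println("IP addresses:", cert.IPAddresses)
	fmt.Println("Not after:   ", cert.NotAfter)
}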

                                                
                                    
TestCertExpiration (234.96s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-546667 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-546667 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (32.564291922s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-546667 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-546667 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (19.99202561s)
helpers_test.go:175: Cleaning up "cert-expiration-546667" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-546667
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-546667: (2.405955283s)
--- PASS: TestCertExpiration (234.96s)

                                                
                                    
TestErrorSpam/setup (27.86s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-719517 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-719517 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-719517 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-719517 --driver=docker  --container-runtime=crio: (27.862822517s)
--- PASS: TestErrorSpam/setup (27.86s)

                                                
                                    
TestErrorSpam/start (0.76s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-719517 --log_dir /tmp/nospam-719517 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-719517 --log_dir /tmp/nospam-719517 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-719517 --log_dir /tmp/nospam-719517 start --dry-run
--- PASS: TestErrorSpam/start (0.76s)

                                                
                                    
TestErrorSpam/status (1.08s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-719517 --log_dir /tmp/nospam-719517 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-719517 --log_dir /tmp/nospam-719517 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-719517 --log_dir /tmp/nospam-719517 status
--- PASS: TestErrorSpam/status (1.08s)

                                                
                                    
TestErrorSpam/pause (7.03s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-719517 --log_dir /tmp/nospam-719517 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-719517 --log_dir /tmp/nospam-719517 pause: exit status 80 (2.425829018s)

                                                
                                                
-- stdout --
	* Pausing node nospam-719517 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T21:05:54Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-719517 --log_dir /tmp/nospam-719517 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-719517 --log_dir /tmp/nospam-719517 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-719517 --log_dir /tmp/nospam-719517 pause: exit status 80 (2.318153901s)

                                                
                                                
-- stdout --
	* Pausing node nospam-719517 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T21:05:57Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-719517 --log_dir /tmp/nospam-719517 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-719517 --log_dir /tmp/nospam-719517 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-719517 --log_dir /tmp/nospam-719517 pause: exit status 80 (2.28425579s)

                                                
                                                
-- stdout --
	* Pausing node nospam-719517 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T21:05:59Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-719517 --log_dir /tmp/nospam-719517 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (7.03s)
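All three pause attempts above fail the same way: listing containers with "sudo runc list -f json" exits 1 because /run/runc does not exist on the node, so there is nothing to pause. The sketch below shows the shape of that enumeration step under the assumption that runc emits a JSON array whose objects carry "id" and "status" fields; it is an illustration of the failing call, not minikube's pause code.

package main

import (
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
)

// runcContainer models the subset of "runc list -f json" output read here.
type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

func main() {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		// This is the failure seen in the log: the /run/runc state
		// directory is missing, so runc exits with status 1.
		fmt.Fprintln(os.Stderr, "runc list failed:", err)
		os.Exit(1)
	}

	var containers []runcContainer
	if err := json.Unmarshal(out, &containers); err != nil {
		fmt.Fprintln(os.Stderr, "decoding runc output:", err)
		os.Exit(1)
	}

	running := 0
	for _, c := range containers {
		if c.Status == "running" {
			running++
		}
	}
	fmt.Printf("%d containers listed, %d running\n", len(containers), running)
}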

                                                
                                    
TestErrorSpam/unpause (5.58s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-719517 --log_dir /tmp/nospam-719517 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-719517 --log_dir /tmp/nospam-719517 unpause: exit status 80 (1.801543142s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-719517 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T21:06:01Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-719517 --log_dir /tmp/nospam-719517 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-719517 --log_dir /tmp/nospam-719517 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-719517 --log_dir /tmp/nospam-719517 unpause: exit status 80 (1.591452335s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-719517 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T21:06:02Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-719517 --log_dir /tmp/nospam-719517 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-719517 --log_dir /tmp/nospam-719517 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-719517 --log_dir /tmp/nospam-719517 unpause: exit status 80 (2.190880189s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-719517 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-13T21:06:05Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-719517 --log_dir /tmp/nospam-719517 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.58s)

                                                
                                    
TestErrorSpam/stop (1.42s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-719517 --log_dir /tmp/nospam-719517 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-719517 --log_dir /tmp/nospam-719517 stop: (1.211554584s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-719517 --log_dir /tmp/nospam-719517 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-719517 --log_dir /tmp/nospam-719517 stop
--- PASS: TestErrorSpam/stop (1.42s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21724-2495/.minikube/files/etc/test/nested/copy/4299/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (74.81s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-192425 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1013 21:06:50.826445    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:06:50.833164    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:06:50.844592    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:06:50.866063    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:06:50.907449    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:06:50.988942    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:06:51.150891    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:06:51.472511    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:06:52.114424    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:06:53.395977    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:06:55.957776    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:07:01.079845    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:07:11.321346    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-192425 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m14.809417311s)
--- PASS: TestFunctional/serial/StartWithProxy (74.81s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (30.45s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1013 21:07:26.897524    4299 config.go:182] Loaded profile config "functional-192425": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-192425 --alsologtostderr -v=8
E1013 21:07:31.802900    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-192425 --alsologtostderr -v=8: (30.440903621s)
functional_test.go:678: soft start took 30.447186547s for "functional-192425" cluster.
I1013 21:07:57.338730    4299 config.go:182] Loaded profile config "functional-192425": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (30.45s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-192425 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.55s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-192425 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-192425 cache add registry.k8s.io/pause:3.1: (1.186890028s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-192425 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-192425 cache add registry.k8s.io/pause:3.3: (1.109513192s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-192425 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-192425 cache add registry.k8s.io/pause:latest: (1.258359544s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.55s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-192425 /tmp/TestFunctionalserialCacheCmdcacheadd_local2483011282/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-192425 cache add minikube-local-cache-test:functional-192425
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-192425 cache delete minikube-local-cache-test:functional-192425
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-192425
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.12s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-192425 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.81s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-192425 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-192425 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-192425 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (299.642672ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-192425 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-192425 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.81s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-192425 kubectl -- --context functional-192425 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-192425 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                    
TestFunctional/serial/ExtraConfig (36.98s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-192425 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1013 21:08:12.765017    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-192425 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (36.976540038s)
functional_test.go:776: restart took 36.976635003s for "functional-192425" cluster.
I1013 21:08:41.788165    4299 config.go:182] Loaded profile config "functional-192425": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (36.98s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-192425 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)
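ComponentHealth reports two facts per control-plane component, the pod phase and a readiness status, which in the pod JSON correspond to status.phase and the Ready condition. A minimal way to reproduce that check outside the test, decoding "kubectl get po -l tier=control-plane -n kube-system -o=json" from stdin with hand-rolled structs rather than client-go, is sketched below; the field names follow the standard Pod schema and everything else is an assumption.

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// podList declares just enough of the Pod schema to judge health.
type podList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	// Usage: kubectl get po -l tier=control-plane -n kube-system -o=json | go run .
	var pods podList
	if err := json.NewDecoder(os.Stdin).Decode(&pods); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	for _, p := range pods.Items {
		ready := "Unknown"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				ready = c.Status
			}
		}
		fmt.Printf("%s phase: %s, ready: %s\n", p.Metadata.Name, p.Status.Phase, ready)
	}
}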

                                                
                                    
TestFunctional/serial/LogsCmd (1.52s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-192425 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-192425 logs: (1.519233619s)
--- PASS: TestFunctional/serial/LogsCmd (1.52s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.45s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-192425 logs --file /tmp/TestFunctionalserialLogsFileCmd2642299975/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-192425 logs --file /tmp/TestFunctionalserialLogsFileCmd2642299975/001/logs.txt: (1.444820576s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.45s)

                                                
                                    
TestFunctional/serial/InvalidService (4.46s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-192425 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-192425
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-192425: exit status 115 (395.135506ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32217 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-192425 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.46s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-192425 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-192425 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-192425 config get cpus: exit status 14 (85.868155ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-192425 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-192425 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-192425 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-192425 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-192425 config get cpus: exit status 14 (80.765188ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.49s)
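The ConfigCmd run above cycles `config set/get/unset cpus` and relies on exit status 14 plus the "specified key could not be found in config" message to mean the key is unset. A minimal Go sketch of that calling pattern (not the test's actual helper; the binary path and profile name are simply the ones from this run):

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

// configGet runs `minikube config get <key>` and maps exit status 14
// ("specified key could not be found in config" above) to ok=false.
func configGet(bin, profile, key string) (value string, ok bool, err error) {
	out, err := exec.Command(bin, "-p", profile, "config", "get", key).Output()
	if err == nil {
		return strings.TrimSpace(string(out)), true, nil
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 14 {
		return "", false, nil // key is simply unset, not an error
	}
	return "", false, err
}

func main() {
	// Binary path and profile name mirror this report; adjust for other runs.
	bin, profile := "out/minikube-linux-arm64", "functional-192425"
	_ = exec.Command(bin, "-p", profile, "config", "set", "cpus", "2").Run()
	if v, ok, err := configGet(bin, profile, "cpus"); err == nil && ok {
		fmt.Println("cpus =", v)
	}
	_ = exec.Command(bin, "-p", profile, "config", "unset", "cpus").Run()
}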

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (10.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-192425 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-192425 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 30766: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.98s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-192425 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-192425 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (269.548251ms)

                                                
                                                
-- stdout --
	* [functional-192425] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21724
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21724-2495/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-2495/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1013 21:19:17.172519   30248 out.go:360] Setting OutFile to fd 1 ...
	I1013 21:19:17.172674   30248 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:19:17.172686   30248 out.go:374] Setting ErrFile to fd 2...
	I1013 21:19:17.172691   30248 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:19:17.173119   30248 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-2495/.minikube/bin
	I1013 21:19:17.173668   30248 out.go:368] Setting JSON to false
	I1013 21:19:17.174632   30248 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":3692,"bootTime":1760386666,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1013 21:19:17.174737   30248 start.go:141] virtualization:  
	I1013 21:19:17.177821   30248 out.go:179] * [functional-192425] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1013 21:19:17.181575   30248 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 21:19:17.181791   30248 notify.go:220] Checking for updates...
	I1013 21:19:17.187356   30248 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 21:19:17.190154   30248 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-2495/kubeconfig
	I1013 21:19:17.192985   30248 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-2495/.minikube
	I1013 21:19:17.196172   30248 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1013 21:19:17.199583   30248 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 21:19:17.203246   30248 config.go:182] Loaded profile config "functional-192425": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:19:17.203959   30248 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 21:19:17.236549   30248 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1013 21:19:17.236661   30248 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 21:19:17.357658   30248 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-13 21:19:17.347525179 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 21:19:17.357761   30248 docker.go:318] overlay module found
	I1013 21:19:17.360866   30248 out.go:179] * Using the docker driver based on existing profile
	I1013 21:19:17.364381   30248 start.go:305] selected driver: docker
	I1013 21:19:17.364398   30248 start.go:925] validating driver "docker" against &{Name:functional-192425 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-192425 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 21:19:17.364491   30248 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 21:19:17.368024   30248 out.go:203] 
	W1013 21:19:17.371465   30248 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1013 21:19:17.374684   30248 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-192425 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.61s)
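Both dry-run invocations above are rejected with exit status 23 because 250MB is below minikube's 1800MB usable minimum, and the existing profile is left untouched. A hedged sketch of detecting that specific failure from a caller, reusing the command line logged at functional_test.go:989:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Same invocation as above; 250MB is below the usable minimum, so the
	// dry-run start is expected to be rejected without modifying the profile.
	cmd := exec.Command("out/minikube-linux-arm64", "start", "-p", "functional-192425",
		"--dry-run", "--memory", "250MB", "--alsologtostderr",
		"--driver=docker", "--container-runtime=crio")
	err := cmd.Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 23 {
		// Exit code 23 is what this report shows for RSRC_INSUFFICIENT_REQ_MEMORY.
		fmt.Println("rejected as expected: requested memory below usable minimum")
		return
	}
	fmt.Println("unexpected result:", err)
}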

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-192425 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-192425 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (271.528463ms)

                                                
                                                
-- stdout --
	* [functional-192425] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21724
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21724-2495/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-2495/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1013 21:19:16.896682   30171 out.go:360] Setting OutFile to fd 1 ...
	I1013 21:19:16.897007   30171 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:19:16.897038   30171 out.go:374] Setting ErrFile to fd 2...
	I1013 21:19:16.897061   30171 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:19:16.898128   30171 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-2495/.minikube/bin
	I1013 21:19:16.898559   30171 out.go:368] Setting JSON to false
	I1013 21:19:16.899413   30171 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":3691,"bootTime":1760386666,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1013 21:19:16.899501   30171 start.go:141] virtualization:  
	I1013 21:19:16.903212   30171 out.go:179] * [functional-192425] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1013 21:19:16.906359   30171 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 21:19:16.906494   30171 notify.go:220] Checking for updates...
	I1013 21:19:16.913272   30171 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 21:19:16.916202   30171 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-2495/kubeconfig
	I1013 21:19:16.919098   30171 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-2495/.minikube
	I1013 21:19:16.922048   30171 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1013 21:19:16.924881   30171 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 21:19:16.928408   30171 config.go:182] Loaded profile config "functional-192425": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:19:16.928947   30171 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 21:19:16.989412   30171 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1013 21:19:16.989535   30171 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 21:19:17.085062   30171 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-13 21:19:17.071957888 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 21:19:17.085160   30171 docker.go:318] overlay module found
	I1013 21:19:17.088192   30171 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1013 21:19:17.091023   30171 start.go:305] selected driver: docker
	I1013 21:19:17.091038   30171 start.go:925] validating driver "docker" against &{Name:functional-192425 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-192425 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1013 21:19:17.091139   30171 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 21:19:17.094632   30171 out.go:203] 
	W1013 21:19:17.097519   30171 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1013 21:19:17.100836   30171 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.27s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-192425 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-192425 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-192425 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.01s)

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-192425 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-192425 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.43s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (25.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [b81b5df4-be3a-4a60-9ea0-1a280df69068] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003271109s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-192425 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-192425 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-192425 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-192425 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [ed7aeb14-b2dc-434d-b4a0-9fb807c695c5] Pending
helpers_test.go:352: "sp-pod" [ed7aeb14-b2dc-434d-b4a0-9fb807c695c5] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [ed7aeb14-b2dc-434d-b4a0-9fb807c695c5] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.003257259s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-192425 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-192425 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-192425 delete -f testdata/storage-provisioner/pod.yaml: (1.170618016s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-192425 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [b0296e03-e6b6-424d-8d0c-4c7a5ae88ff2] Pending
helpers_test.go:352: "sp-pod" [b0296e03-e6b6-424d-8d0c-4c7a5ae88ff2] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.003672647s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-192425 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.08s)
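The PVC flow above applies pvc.yaml, waits for sp-pod to reach Running, writes /tmp/mount/foo, then deletes and recreates the pod to confirm the file persists on the claim. The wait step boils down to polling the pod phase; a rough sketch using kubectl's jsonpath output (the polling helper is illustrative, not the test's code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitPodRunning polls `kubectl get pod` until .status.phase is Running or the
// timeout expires. The context and pod name come from this report.
func waitPodRunning(kubectx, pod string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubectx, "get", "pod", pod,
			"-o", "jsonpath={.status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "Running" {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s not Running within %s", pod, timeout)
}

func main() {
	if err := waitPodRunning("functional-192425", "sp-pod", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}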

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-192425 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-192425 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.70s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (2.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-192425 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-192425 ssh -n functional-192425 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-192425 cp functional-192425:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd64708362/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-192425 ssh -n functional-192425 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-192425 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-192425 ssh -n functional-192425 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.25s)

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/4299/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-192425 ssh "sudo cat /etc/test/nested/copy/4299/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.39s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (2.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/4299.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-192425 ssh "sudo cat /etc/ssl/certs/4299.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/4299.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-192425 ssh "sudo cat /usr/share/ca-certificates/4299.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-192425 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/42992.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-192425 ssh "sudo cat /etc/ssl/certs/42992.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/42992.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-192425 ssh "sudo cat /usr/share/ca-certificates/42992.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-192425 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.21s)
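CertSync verifies that host certificates (4299.pem, 42992.pem) appear inside the node both under their own names and under their OpenSSL hash names (51391683.0, 3ec20f2e.0). A small sketch of one such comparison over `minikube ssh`; the local path here is a placeholder, not the path the suite actually uses:

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// certSynced compares a certificate on the host with the copy expected inside
// the node at one of the paths checked above (e.g. /etc/ssl/certs/4299.pem).
func certSynced(localPath, vmPath string) (bool, error) {
	want, err := os.ReadFile(localPath)
	if err != nil {
		return false, err
	}
	got, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-192425",
		"ssh", "sudo cat "+vmPath).Output()
	if err != nil {
		return false, err
	}
	return bytes.Equal(bytes.TrimSpace(want), bytes.TrimSpace(got)), nil
}

func main() {
	// "testdata/4299.pem" is a placeholder host path for illustration only.
	ok, err := certSynced("testdata/4299.pem", "/etc/ssl/certs/4299.pem")
	fmt.Println(ok, err)
}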

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-192425 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.12s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-192425 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-192425 ssh "sudo systemctl is-active docker": exit status 1 (355.306024ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-192425 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-192425 ssh "sudo systemctl is-active containerd": exit status 1 (327.463256ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.68s)
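With crio as the active runtime, `systemctl is-active docker` and `systemctl is-active containerd` print "inactive" and exit non-zero (systemd reports status 3 for inactive units, surfaced above as "ssh: Process exited with status 3"). A sketch that checks the printed state rather than the exit code, using the binary and profile from this run:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// runtimeInactive asks the node, over `minikube ssh`, whether a unit is active.
// An inactive unit makes the ssh command exit non-zero while still printing
// "inactive" on stdout, so the printed state is checked instead of the error.
func runtimeInactive(unit string) bool {
	out, _ := exec.Command("out/minikube-linux-arm64", "-p", "functional-192425",
		"ssh", "sudo systemctl is-active "+unit).Output()
	return strings.TrimSpace(string(out)) == "inactive"
}

func main() {
	for _, unit := range []string{"docker", "containerd"} {
		fmt.Printf("%s inactive: %v\n", unit, runtimeInactive(unit))
	}
}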

                                                
                                    
x
+
TestFunctional/parallel/License (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-192425 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-192425 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-192425 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-192425 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 26900: os: process already finished
helpers_test.go:525: unable to kill pid 26690: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.74s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-192425 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-192425 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [d28f7277-56c4-4315-a93f-fd3853c005cf] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [d28f7277-56c4-4315-a93f-fd3853c005cf] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.00289693s
I1013 21:08:59.688089    4299 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.44s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-192425 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.13s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.104.154.56 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
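The IngressIP and AccessDirect steps above read the LoadBalancer IP assigned by the tunnel and then hit it over HTTP. A compact sketch of the same two steps, assuming the kubectl context and service name from this log:

package main

import (
	"fmt"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	// Read the ingress IP with the same jsonpath query logged above, then probe
	// the nginx-svc service through the tunnel. Error handling is minimal.
	out, err := exec.Command("kubectl", "--context", "functional-192425",
		"get", "svc", "nginx-svc",
		"-o", "jsonpath={.status.loadBalancer.ingress[0].ip}").Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	ip := strings.TrimSpace(string(out))
	resp, err := http.Get("http://" + ip)
	if err != nil {
		fmt.Println("tunnel not reachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("tunnel at http://"+ip, "answered with", resp.Status)
}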

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-192425 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "353.902143ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "55.145684ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "359.642084ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "60.204058ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (7.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-192425 /tmp/TestFunctionalparallelMountCmdany-port158789324/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1760390345155518949" to /tmp/TestFunctionalparallelMountCmdany-port158789324/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1760390345155518949" to /tmp/TestFunctionalparallelMountCmdany-port158789324/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1760390345155518949" to /tmp/TestFunctionalparallelMountCmdany-port158789324/001/test-1760390345155518949
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-192425 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-192425 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (341.984822ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1013 21:19:05.498487    4299 retry.go:31] will retry after 678.128018ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-192425 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-192425 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 13 21:19 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 13 21:19 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 13 21:19 test-1760390345155518949
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-192425 ssh cat /mount-9p/test-1760390345155518949
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-192425 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [46879d13-1440-4740-838c-fe924f473a1f] Pending
helpers_test.go:352: "busybox-mount" [46879d13-1440-4740-838c-fe924f473a1f] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [46879d13-1440-4740-838c-fe924f473a1f] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [46879d13-1440-4740-838c-fe924f473a1f] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.003552253s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-192425 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-192425 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-192425 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-192425 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-192425 /tmp/TestFunctionalparallelMountCmdany-port158789324/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.05s)
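The first `findmnt -T /mount-9p | grep 9p` probe above fails because it races the mount daemon, and the test retries after ~678ms before succeeding. A sketch of that retry pattern (binary, profile, attempt count and backoff are illustrative):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForNinePMount repeats the same probe the test uses over `minikube ssh`
// until the 9p mount shows up, since the first attempt can race the mount.
func waitForNinePMount(attempts int) error {
	var err error
	for i := 0; i < attempts; i++ {
		err = exec.Command("out/minikube-linux-arm64", "-p", "functional-192425",
			"ssh", "findmnt -T /mount-9p | grep 9p").Run()
		if err == nil {
			return nil
		}
		time.Sleep(700 * time.Millisecond)
	}
	return fmt.Errorf("9p mount never appeared: %w", err)
}

func main() {
	fmt.Println(waitForNinePMount(5))
}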

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-192425 /tmp/TestFunctionalparallelMountCmdspecific-port432795955/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-192425 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-192425 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (357.833569ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1013 21:19:12.563396    4299 retry.go:31] will retry after 355.943551ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-192425 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-192425 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-192425 /tmp/TestFunctionalparallelMountCmdspecific-port432795955/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-192425 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-192425 ssh "sudo umount -f /mount-9p": exit status 1 (287.113503ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-192425 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-192425 /tmp/TestFunctionalparallelMountCmdspecific-port432795955/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.74s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-192425 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3241921751/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-192425 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3241921751/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-192425 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3241921751/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-192425 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-192425 ssh "findmnt -T" /mount1: exit status 1 (589.624958ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1013 21:19:14.538285    4299 retry.go:31] will retry after 408.02435ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-192425 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-192425 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-192425 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-192425 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-192425 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3241921751/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-192425 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3241921751/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-192425 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3241921751/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.88s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-192425 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.63s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-192425 service list -o json
functional_test.go:1504: Took "605.549528ms" to run "out/minikube-linux-arm64 -p functional-192425 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.61s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-192425 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (1.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-192425 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-192425 version -o=json --components: (1.305229119s)
--- PASS: TestFunctional/parallel/Version/components (1.31s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-192425 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-192425 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-192425 image ls --format short --alsologtostderr:
I1013 21:19:31.926429   32881 out.go:360] Setting OutFile to fd 1 ...
I1013 21:19:31.926597   32881 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1013 21:19:31.926608   32881 out.go:374] Setting ErrFile to fd 2...
I1013 21:19:31.926615   32881 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1013 21:19:31.926878   32881 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-2495/.minikube/bin
I1013 21:19:31.927526   32881 config.go:182] Loaded profile config "functional-192425": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1013 21:19:31.927643   32881 config.go:182] Loaded profile config "functional-192425": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1013 21:19:31.928189   32881 cli_runner.go:164] Run: docker container inspect functional-192425 --format={{.State.Status}}
I1013 21:19:31.956686   32881 ssh_runner.go:195] Run: systemctl --version
I1013 21:19:31.956742   32881 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-192425
I1013 21:19:31.976217   32881 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/functional-192425/id_rsa Username:docker}
I1013 21:19:32.087375   32881 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)
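As the stderr above shows, `image ls` ends up running `sudo crictl images --output json` on the node. A sketch that fetches that JSON over `minikube ssh` and prints the tags; the JSON field names (images, repoTags) follow crictl's usual output shape and should be treated as an assumption here:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// crictlImages mirrors only the fields this sketch needs from crictl's JSON.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-192425",
		"ssh", "sudo crictl images --output json").Output()
	if err != nil {
		fmt.Println("ssh failed:", err)
		return
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			fmt.Println(tag)
		}
	}
}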

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-192425 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-192425 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
│ docker.io/library/nginx                 │ alpine             │ 9c92f55c0336c │ 54.7MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ a1894772a478e │ 206MB  │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ 7eb2c6ff0c5a7 │ 72.6MB │
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ 05baa95f5142d │ 75.9MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ b5f57ec6b9867 │ 51.6MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
│ docker.io/library/nginx                 │ latest             │ e35ad067421cc │ 184MB  │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 138784d87c9c5 │ 73.2MB │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 1611cd07b61d5 │ 3.77MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ 43911e833d64d │ 84.8MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-192425 image ls --format table --alsologtostderr:
I1013 21:19:32.955357   33144 out.go:360] Setting OutFile to fd 1 ...
I1013 21:19:32.956188   33144 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1013 21:19:32.956227   33144 out.go:374] Setting ErrFile to fd 2...
I1013 21:19:32.956250   33144 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1013 21:19:32.956532   33144 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-2495/.minikube/bin
I1013 21:19:32.957244   33144 config.go:182] Loaded profile config "functional-192425": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1013 21:19:32.957423   33144 config.go:182] Loaded profile config "functional-192425": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1013 21:19:32.958051   33144 cli_runner.go:164] Run: docker container inspect functional-192425 --format={{.State.Status}}
I1013 21:19:32.986883   33144 ssh_runner.go:195] Run: systemctl --version
I1013 21:19:32.986935   33144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-192425
I1013 21:19:33.005902   33144 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/functional-192425/id_rsa Username:docker}
I1013 21:19:33.114831   33144 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-192425 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-192425 image ls --format json --alsologtostderr:
[{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["
gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f","registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"72629077"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"e35ad067421ccda484ee30e4ccc8a38fa13f9a21dd8d356e495c2d3a1f0766e9"
,"repoDigests":["docker.io/library/nginx@sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6","docker.io/library/nginx@sha256:ac03974aaaeb5e3fbe2ab74d7f2badf1388596f6877cbacf78af3617addbba9a"],"repoTags":["docker.io/library/nginx:latest"],"size":"184136558"},{"id":"05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9","repoDigests":["registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6","registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"75938711"},{"id":"b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500","registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"51592017"},{"id":"b1a8c6
f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","repoDigests":["registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"205987068"},{"id":"43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902","registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645"],"repoTags":["registry.k
8s.io/kube-apiserver:v1.34.1"],"size":"84753391"},{"id":"9c92f55c0336c2597a5b458ba84a3fd242b209d8b5079443646a0d269df0d4aa","repoDigests":["docker.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0","docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54704654"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6
473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"73195387"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-192425 image ls --format json --alsologtostderr:
I1013 21:19:32.687195   33077 out.go:360] Setting OutFile to fd 1 ...
I1013 21:19:32.687520   33077 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1013 21:19:32.687535   33077 out.go:374] Setting ErrFile to fd 2...
I1013 21:19:32.687541   33077 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1013 21:19:32.687925   33077 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-2495/.minikube/bin
I1013 21:19:32.688755   33077 config.go:182] Loaded profile config "functional-192425": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1013 21:19:32.688976   33077 config.go:182] Loaded profile config "functional-192425": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1013 21:19:32.689554   33077 cli_runner.go:164] Run: docker container inspect functional-192425 --format={{.State.Status}}
I1013 21:19:32.710492   33077 ssh_runner.go:195] Run: systemctl --version
I1013 21:19:32.710546   33077 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-192425
I1013 21:19:32.747092   33077 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/functional-192425/id_rsa Username:docker}
I1013 21:19:32.850490   33077 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)
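
Note: the JSON format above is an array of objects with id, repoDigests, repoTags and size fields, so it is the most convenient form to post-process. A sketch of doing that on the host (assumes jq is installed; jq is not used by the test itself):

	# Print "<first tag>  <size in bytes>" for every image that carries at least one tag.
	out/minikube-linux-arm64 -p functional-192425 image ls --format json \
	  | jq -r '.[] | select(.repoTags | length > 0) | "\(.repoTags[0])\t\(.size)"'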

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-192425 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-192425 image ls --format yaml --alsologtostderr:
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "73195387"
- id: a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e
repoDigests:
- registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "205987068"
- id: 43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
- registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "84753391"
- id: 05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9
repoDigests:
- registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "75938711"
- id: b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
- registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "51592017"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: e35ad067421ccda484ee30e4ccc8a38fa13f9a21dd8d356e495c2d3a1f0766e9
repoDigests:
- docker.io/library/nginx@sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6
- docker.io/library/nginx@sha256:ac03974aaaeb5e3fbe2ab74d7f2badf1388596f6877cbacf78af3617addbba9a
repoTags:
- docker.io/library/nginx:latest
size: "184136558"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "72629077"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: 9c92f55c0336c2597a5b458ba84a3fd242b209d8b5079443646a0d269df0d4aa
repoDigests:
- docker.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0
- docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22
repoTags:
- docker.io/library/nginx:alpine
size: "54704654"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-192425 image ls --format yaml --alsologtostderr:
I1013 21:19:32.198184   32946 out.go:360] Setting OutFile to fd 1 ...
I1013 21:19:32.198358   32946 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1013 21:19:32.198390   32946 out.go:374] Setting ErrFile to fd 2...
I1013 21:19:32.198411   32946 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1013 21:19:32.198671   32946 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-2495/.minikube/bin
I1013 21:19:32.199310   32946 config.go:182] Loaded profile config "functional-192425": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1013 21:19:32.199470   32946 config.go:182] Loaded profile config "functional-192425": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1013 21:19:32.200039   32946 cli_runner.go:164] Run: docker container inspect functional-192425 --format={{.State.Status}}
I1013 21:19:32.219461   32946 ssh_runner.go:195] Run: systemctl --version
I1013 21:19:32.219510   32946 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-192425
I1013 21:19:32.239346   32946 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/functional-192425/id_rsa Username:docker}
I1013 21:19:32.346297   32946 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-192425 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-192425 ssh pgrep buildkitd: exit status 1 (299.370943ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-192425 image build -t localhost/my-image:functional-192425 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-192425 image build -t localhost/my-image:functional-192425 testdata/build --alsologtostderr: (3.394379972s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-192425 image build -t localhost/my-image:functional-192425 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> fe201693e84
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-192425
--> aa066961b99
Successfully tagged localhost/my-image:functional-192425
aa066961b996a02bf266a050c6ae7868e5c89c1b76be3029641bb91ac90424dd
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-192425 image build -t localhost/my-image:functional-192425 testdata/build --alsologtostderr:
I1013 21:19:32.749995   33082 out.go:360] Setting OutFile to fd 1 ...
I1013 21:19:32.750209   33082 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1013 21:19:32.750237   33082 out.go:374] Setting ErrFile to fd 2...
I1013 21:19:32.750255   33082 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1013 21:19:32.750571   33082 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-2495/.minikube/bin
I1013 21:19:32.751217   33082 config.go:182] Loaded profile config "functional-192425": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1013 21:19:32.752073   33082 config.go:182] Loaded profile config "functional-192425": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1013 21:19:32.752618   33082 cli_runner.go:164] Run: docker container inspect functional-192425 --format={{.State.Status}}
I1013 21:19:32.778498   33082 ssh_runner.go:195] Run: systemctl --version
I1013 21:19:32.778551   33082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-192425
I1013 21:19:32.801240   33082 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/functional-192425/id_rsa Username:docker}
I1013 21:19:32.903501   33082 build_images.go:161] Building image from path: /tmp/build.2069924948.tar
I1013 21:19:32.903563   33082 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1013 21:19:32.913442   33082 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2069924948.tar
I1013 21:19:32.917923   33082 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2069924948.tar: stat -c "%s %y" /var/lib/minikube/build/build.2069924948.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2069924948.tar': No such file or directory
I1013 21:19:32.917953   33082 ssh_runner.go:362] scp /tmp/build.2069924948.tar --> /var/lib/minikube/build/build.2069924948.tar (3072 bytes)
I1013 21:19:32.939275   33082 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2069924948
I1013 21:19:32.948225   33082 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2069924948 -xf /var/lib/minikube/build/build.2069924948.tar
I1013 21:19:32.958122   33082 crio.go:315] Building image: /var/lib/minikube/build/build.2069924948
I1013 21:19:32.958184   33082 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-192425 /var/lib/minikube/build/build.2069924948 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1013 21:19:36.051301   33082 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-192425 /var/lib/minikube/build/build.2069924948 --cgroup-manager=cgroupfs: (3.093090363s)
I1013 21:19:36.051372   33082 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2069924948
I1013 21:19:36.059206   33082 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2069924948.tar
I1013 21:19:36.066992   33082 build_images.go:217] Built localhost/my-image:functional-192425 from /tmp/build.2069924948.tar
I1013 21:19:36.067021   33082 build_images.go:133] succeeded building to: functional-192425
I1013 21:19:36.067026   33082 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-192425 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.93s)
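
Note: the STEP 1/3..3/3 lines in the stdout above imply a build context containing a three-instruction Dockerfile plus a content.txt file. A rough way to reproduce it by hand (the /tmp/build-sketch path and the placeholder file contents are illustrative; only the Dockerfile instructions come from the log):

	# Recreate a build context equivalent to testdata/build as implied by the STEP lines.
	mkdir -p /tmp/build-sketch
	printf 'hello\n' > /tmp/build-sketch/content.txt        # placeholder payload, not from the log
	cat > /tmp/build-sketch/Dockerfile <<'EOF'
	FROM gcr.io/k8s-minikube/busybox
	RUN true
	ADD content.txt /
	EOF
	# Build inside the cluster node; on crio profiles this is routed through podman, as the stderr shows.
	out/minikube-linux-arm64 -p functional-192425 image build -t localhost/my-image:functional-192425 /tmp/build-sketch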

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-192425
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.65s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-192425 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-192425 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-192425 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)
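
Note: update-context rewrites the kubeconfig entry for the profile so kubectl points at the cluster's current API server address. A quick way to confirm the result from the host (plain kubectl, assumed to be installed; not part of the test):

	# The minikube profile name doubles as the kubeconfig context name.
	kubectl config get-contexts functional-192425
	kubectl --context functional-192425 get nodes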

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-192425 image rm kicbase/echo-server:functional-192425 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-192425 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.69s)
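
Note: after image rm, the follow-up image ls at functional_test.go:466 is what confirms the tag is gone. The same check from a shell would look roughly like this (grep target mirrors the tag removed above):

	# Expect grep to find nothing once the tag has been removed from the node.
	out/minikube-linux-arm64 -p functional-192425 image ls \
	  | grep 'kicbase/echo-server:functional-192425' \
	  || echo "image removed"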

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-192425
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-192425
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-192425
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (203.13s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1013 21:21:50.818452    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-028437 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (3m22.252432462s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (203.13s)
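
Note: the --ha start above brings up three control-plane nodes for profile ha-028437 (the later subtests add a worker and stop m02). A hedged way to look at the resulting topology once the cluster is up (kubectl on the host is assumed; the profile name is also the kubeconfig context):

	# Roles should show three control-plane nodes, plus any workers added later.
	kubectl --context ha-028437 get nodes -o wide
	out/minikube-linux-arm64 -p ha-028437 status --alsologtostderr -v 5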

                                                
                                    
TestMultiControlPlane/serial/DeployApp (6.29s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-028437 kubectl -- rollout status deployment/busybox: (3.677534281s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 kubectl -- exec busybox-7b57f96db7-4g8kx -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 kubectl -- exec busybox-7b57f96db7-v2lmk -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 kubectl -- exec busybox-7b57f96db7-zm4dz -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 kubectl -- exec busybox-7b57f96db7-4g8kx -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 kubectl -- exec busybox-7b57f96db7-v2lmk -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 kubectl -- exec busybox-7b57f96db7-zm4dz -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 kubectl -- exec busybox-7b57f96db7-4g8kx -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 kubectl -- exec busybox-7b57f96db7-v2lmk -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 kubectl -- exec busybox-7b57f96db7-zm4dz -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.29s)
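
Note: the DNS checks above run the same three nslookup targets against each busybox pod. Collapsed into a loop, they amount to roughly the following sketch (the pod list uses the same jsonpath query the test runs; PROFILE is just a local variable):

	PROFILE=ha-028437
	for pod in $(out/minikube-linux-arm64 -p "$PROFILE" kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'); do
	  for name in kubernetes.io kubernetes.default kubernetes.default.svc.cluster.local; do
	    out/minikube-linux-arm64 -p "$PROFILE" kubectl -- exec "$pod" -- nslookup "$name"
	  done
	done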

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.42s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 kubectl -- exec busybox-7b57f96db7-4g8kx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 kubectl -- exec busybox-7b57f96db7-4g8kx -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 kubectl -- exec busybox-7b57f96db7-v2lmk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 kubectl -- exec busybox-7b57f96db7-v2lmk -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 kubectl -- exec busybox-7b57f96db7-zm4dz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 kubectl -- exec busybox-7b57f96db7-zm4dz -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.42s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (56.82s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 node add --alsologtostderr -v 5
E1013 21:23:13.892714    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:23:51.249204    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/functional-192425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:23:51.255720    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/functional-192425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:23:51.267112    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/functional-192425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:23:51.288505    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/functional-192425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:23:51.329874    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/functional-192425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:23:51.411308    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/functional-192425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:23:51.572684    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/functional-192425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:23:51.894296    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/functional-192425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:23:52.536299    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/functional-192425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:23:53.817632    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/functional-192425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:23:56.379937    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/functional-192425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:24:01.502289    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/functional-192425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-028437 node add --alsologtostderr -v 5: (55.755446581s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-028437 status --alsologtostderr -v 5: (1.069098394s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (56.82s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-028437 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.10s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.001627676s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.00s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (19.27s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-028437 status --output json --alsologtostderr -v 5: (1.007137823s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 cp testdata/cp-test.txt ha-028437:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 ssh -n ha-028437 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 cp ha-028437:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1336474250/001/cp-test_ha-028437.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 ssh -n ha-028437 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 cp ha-028437:/home/docker/cp-test.txt ha-028437-m02:/home/docker/cp-test_ha-028437_ha-028437-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 ssh -n ha-028437 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 ssh -n ha-028437-m02 "sudo cat /home/docker/cp-test_ha-028437_ha-028437-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 cp ha-028437:/home/docker/cp-test.txt ha-028437-m03:/home/docker/cp-test_ha-028437_ha-028437-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 ssh -n ha-028437 "sudo cat /home/docker/cp-test.txt"
E1013 21:24:11.743687    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/functional-192425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 ssh -n ha-028437-m03 "sudo cat /home/docker/cp-test_ha-028437_ha-028437-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 cp ha-028437:/home/docker/cp-test.txt ha-028437-m04:/home/docker/cp-test_ha-028437_ha-028437-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 ssh -n ha-028437 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 ssh -n ha-028437-m04 "sudo cat /home/docker/cp-test_ha-028437_ha-028437-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 cp testdata/cp-test.txt ha-028437-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 ssh -n ha-028437-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 cp ha-028437-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1336474250/001/cp-test_ha-028437-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 ssh -n ha-028437-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 cp ha-028437-m02:/home/docker/cp-test.txt ha-028437:/home/docker/cp-test_ha-028437-m02_ha-028437.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 ssh -n ha-028437-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 ssh -n ha-028437 "sudo cat /home/docker/cp-test_ha-028437-m02_ha-028437.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 cp ha-028437-m02:/home/docker/cp-test.txt ha-028437-m03:/home/docker/cp-test_ha-028437-m02_ha-028437-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 ssh -n ha-028437-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 ssh -n ha-028437-m03 "sudo cat /home/docker/cp-test_ha-028437-m02_ha-028437-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 cp ha-028437-m02:/home/docker/cp-test.txt ha-028437-m04:/home/docker/cp-test_ha-028437-m02_ha-028437-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 ssh -n ha-028437-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 ssh -n ha-028437-m04 "sudo cat /home/docker/cp-test_ha-028437-m02_ha-028437-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 cp testdata/cp-test.txt ha-028437-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 ssh -n ha-028437-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 cp ha-028437-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1336474250/001/cp-test_ha-028437-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 ssh -n ha-028437-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 cp ha-028437-m03:/home/docker/cp-test.txt ha-028437:/home/docker/cp-test_ha-028437-m03_ha-028437.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 ssh -n ha-028437-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 ssh -n ha-028437 "sudo cat /home/docker/cp-test_ha-028437-m03_ha-028437.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 cp ha-028437-m03:/home/docker/cp-test.txt ha-028437-m02:/home/docker/cp-test_ha-028437-m03_ha-028437-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 ssh -n ha-028437-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 ssh -n ha-028437-m02 "sudo cat /home/docker/cp-test_ha-028437-m03_ha-028437-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 cp ha-028437-m03:/home/docker/cp-test.txt ha-028437-m04:/home/docker/cp-test_ha-028437-m03_ha-028437-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 ssh -n ha-028437-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 ssh -n ha-028437-m04 "sudo cat /home/docker/cp-test_ha-028437-m03_ha-028437-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 cp testdata/cp-test.txt ha-028437-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 ssh -n ha-028437-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 cp ha-028437-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1336474250/001/cp-test_ha-028437-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 ssh -n ha-028437-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 cp ha-028437-m04:/home/docker/cp-test.txt ha-028437:/home/docker/cp-test_ha-028437-m04_ha-028437.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 ssh -n ha-028437-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 ssh -n ha-028437 "sudo cat /home/docker/cp-test_ha-028437-m04_ha-028437.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 cp ha-028437-m04:/home/docker/cp-test.txt ha-028437-m02:/home/docker/cp-test_ha-028437-m04_ha-028437-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 ssh -n ha-028437-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 ssh -n ha-028437-m02 "sudo cat /home/docker/cp-test_ha-028437-m04_ha-028437-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 cp ha-028437-m04:/home/docker/cp-test.txt ha-028437-m03:/home/docker/cp-test_ha-028437-m04_ha-028437-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 ssh -n ha-028437-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 ssh -n ha-028437-m03 "sudo cat /home/docker/cp-test_ha-028437-m04_ha-028437-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.27s)
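
Note: the long sequence above is an all-pairs copy check: testdata/cp-test.txt is pushed to every node, copied node-to-node, and read back over ssh each time. Compressed into a loop, the pattern is roughly (PROFILE and NODES are local variables, the paths match the logged commands):

	PROFILE=ha-028437
	NODES="ha-028437 ha-028437-m02 ha-028437-m03 ha-028437-m04"
	for src in $NODES; do
	  out/minikube-linux-arm64 -p "$PROFILE" cp testdata/cp-test.txt "$src:/home/docker/cp-test.txt"
	  for dst in $NODES; do
	    [ "$src" = "$dst" ] && continue
	    out/minikube-linux-arm64 -p "$PROFILE" cp "$src:/home/docker/cp-test.txt" "$dst:/home/docker/cp-test_${src}_${dst}.txt"
	    out/minikube-linux-arm64 -p "$PROFILE" ssh -n "$dst" "sudo cat /home/docker/cp-test_${src}_${dst}.txt"
	  done
	done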

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 node stop m02 --alsologtostderr -v 5
E1013 21:24:32.224978    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/functional-192425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-028437 node stop m02 --alsologtostderr -v 5: (11.88097484s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-028437 status --alsologtostderr -v 5: exit status 7 (792.219628ms)

                                                
                                                
-- stdout --
	ha-028437
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-028437-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-028437-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-028437-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1013 21:24:38.861412   47916 out.go:360] Setting OutFile to fd 1 ...
	I1013 21:24:38.861543   47916 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:24:38.861569   47916 out.go:374] Setting ErrFile to fd 2...
	I1013 21:24:38.861579   47916 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:24:38.862477   47916 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-2495/.minikube/bin
	I1013 21:24:38.868768   47916 out.go:368] Setting JSON to false
	I1013 21:24:38.868828   47916 mustload.go:65] Loading cluster: ha-028437
	I1013 21:24:38.868902   47916 notify.go:220] Checking for updates...
	I1013 21:24:38.869870   47916 config.go:182] Loaded profile config "ha-028437": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:24:38.869893   47916 status.go:174] checking status of ha-028437 ...
	I1013 21:24:38.870396   47916 cli_runner.go:164] Run: docker container inspect ha-028437 --format={{.State.Status}}
	I1013 21:24:38.894969   47916 status.go:371] ha-028437 host status = "Running" (err=<nil>)
	I1013 21:24:38.894990   47916 host.go:66] Checking if "ha-028437" exists ...
	I1013 21:24:38.895439   47916 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-028437
	I1013 21:24:38.927034   47916 host.go:66] Checking if "ha-028437" exists ...
	I1013 21:24:38.927432   47916 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 21:24:38.927576   47916 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-028437
	I1013 21:24:38.946311   47916 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/ha-028437/id_rsa Username:docker}
	I1013 21:24:39.045669   47916 ssh_runner.go:195] Run: systemctl --version
	I1013 21:24:39.052212   47916 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 21:24:39.065698   47916 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 21:24:39.147380   47916 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:true NGoroutines:72 SystemTime:2025-10-13 21:24:39.136473736 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 21:24:39.148021   47916 kubeconfig.go:125] found "ha-028437" server: "https://192.168.49.254:8443"
	I1013 21:24:39.148070   47916 api_server.go:166] Checking apiserver status ...
	I1013 21:24:39.148125   47916 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 21:24:39.160524   47916 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1248/cgroup
	I1013 21:24:39.169999   47916 api_server.go:182] apiserver freezer: "2:freezer:/docker/239769bc4ee168b05e73bfd13c3ee0b0a95c097a268103b9034523f6d15ff7b4/crio/crio-3533b9a24af9bd94a969188c75b38339e73c8c1213f9589f8f276d42bd4aaa14"
	I1013 21:24:39.170074   47916 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/239769bc4ee168b05e73bfd13c3ee0b0a95c097a268103b9034523f6d15ff7b4/crio/crio-3533b9a24af9bd94a969188c75b38339e73c8c1213f9589f8f276d42bd4aaa14/freezer.state
	I1013 21:24:39.179387   47916 api_server.go:204] freezer state: "THAWED"
	I1013 21:24:39.179414   47916 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1013 21:24:39.188454   47916 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1013 21:24:39.188529   47916 status.go:463] ha-028437 apiserver status = Running (err=<nil>)
	I1013 21:24:39.188555   47916 status.go:176] ha-028437 status: &{Name:ha-028437 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1013 21:24:39.188594   47916 status.go:174] checking status of ha-028437-m02 ...
	I1013 21:24:39.188940   47916 cli_runner.go:164] Run: docker container inspect ha-028437-m02 --format={{.State.Status}}
	I1013 21:24:39.205148   47916 status.go:371] ha-028437-m02 host status = "Stopped" (err=<nil>)
	I1013 21:24:39.205168   47916 status.go:384] host is not running, skipping remaining checks
	I1013 21:24:39.205175   47916 status.go:176] ha-028437-m02 status: &{Name:ha-028437-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1013 21:24:39.205193   47916 status.go:174] checking status of ha-028437-m03 ...
	I1013 21:24:39.205503   47916 cli_runner.go:164] Run: docker container inspect ha-028437-m03 --format={{.State.Status}}
	I1013 21:24:39.224784   47916 status.go:371] ha-028437-m03 host status = "Running" (err=<nil>)
	I1013 21:24:39.224807   47916 host.go:66] Checking if "ha-028437-m03" exists ...
	I1013 21:24:39.225272   47916 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-028437-m03
	I1013 21:24:39.243016   47916 host.go:66] Checking if "ha-028437-m03" exists ...
	I1013 21:24:39.243331   47916 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 21:24:39.243370   47916 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-028437-m03
	I1013 21:24:39.261860   47916 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/ha-028437-m03/id_rsa Username:docker}
	I1013 21:24:39.361262   47916 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 21:24:39.375643   47916 kubeconfig.go:125] found "ha-028437" server: "https://192.168.49.254:8443"
	I1013 21:24:39.375733   47916 api_server.go:166] Checking apiserver status ...
	I1013 21:24:39.375873   47916 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 21:24:39.387497   47916 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1191/cgroup
	I1013 21:24:39.397552   47916 api_server.go:182] apiserver freezer: "2:freezer:/docker/37d482b00d7c4926770efe8c25c6a1fe5d01c8dba6e8928fdf2620719cd14716/crio/crio-2f8884b67479540b4c248fa04babd6c1a8d52d2e63d453eafb4331e2cb5fd6ea"
	I1013 21:24:39.397636   47916 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/37d482b00d7c4926770efe8c25c6a1fe5d01c8dba6e8928fdf2620719cd14716/crio/crio-2f8884b67479540b4c248fa04babd6c1a8d52d2e63d453eafb4331e2cb5fd6ea/freezer.state
	I1013 21:24:39.406265   47916 api_server.go:204] freezer state: "THAWED"
	I1013 21:24:39.406295   47916 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1013 21:24:39.415004   47916 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1013 21:24:39.415082   47916 status.go:463] ha-028437-m03 apiserver status = Running (err=<nil>)
	I1013 21:24:39.415108   47916 status.go:176] ha-028437-m03 status: &{Name:ha-028437-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1013 21:24:39.415148   47916 status.go:174] checking status of ha-028437-m04 ...
	I1013 21:24:39.415482   47916 cli_runner.go:164] Run: docker container inspect ha-028437-m04 --format={{.State.Status}}
	I1013 21:24:39.433886   47916 status.go:371] ha-028437-m04 host status = "Running" (err=<nil>)
	I1013 21:24:39.433914   47916 host.go:66] Checking if "ha-028437-m04" exists ...
	I1013 21:24:39.434205   47916 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-028437-m04
	I1013 21:24:39.455814   47916 host.go:66] Checking if "ha-028437-m04" exists ...
	I1013 21:24:39.456114   47916 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 21:24:39.456159   47916 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-028437-m04
	I1013 21:24:39.479696   47916 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/ha-028437-m04/id_rsa Username:docker}
	I1013 21:24:39.585356   47916 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 21:24:39.599336   47916 status.go:176] ha-028437-m04 status: &{Name:ha-028437-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.67s)
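
For context, the node-status probe in the log above boils down to a short shell sequence: find the kube-apiserver process, confirm its freezer cgroup is THAWED, then hit /healthz through the HA virtual IP. A minimal sketch follows; it assumes the cgroup v1 freezer layout seen on this host, and the pgrep pattern, VIP, and port (192.168.49.254:8443) are values from this particular run.

	# find the newest kube-apiserver process on the node (same pattern the log uses)
	APISERVER_PID=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*')
	# the third colon-separated field of the freezer line is the cgroup path
	FREEZER_PATH=$(sudo egrep '^[0-9]+:freezer:' "/proc/${APISERVER_PID}/cgroup" | cut -d: -f3)
	sudo cat "/sys/fs/cgroup/freezer${FREEZER_PATH}/freezer.state"   # expect: THAWED
	# finally, probe the apiserver health endpoint behind the HA VIP
	curl -k https://192.168.49.254:8443/healthz                      # expect: ok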

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.87s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.87s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (35.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 node start m02 --alsologtostderr -v 5
E1013 21:25:13.187420    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/functional-192425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-028437 node start m02 --alsologtostderr -v 5: (33.655652247s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-028437 status --alsologtostderr -v 5: (1.296146427s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (35.11s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.2s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.201212508s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.20s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (127.14s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-028437 stop --alsologtostderr -v 5: (36.82128476s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 start --wait true --alsologtostderr -v 5
E1013 21:26:35.109540    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/functional-192425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:26:50.818492    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-028437 start --wait true --alsologtostderr -v 5: (1m30.166186429s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (127.14s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (11.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-028437 node delete m03 --alsologtostderr -v 5: (10.686180432s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.65s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.79s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.79s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (35.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-028437 stop --alsologtostderr -v 5: (35.696014384s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-028437 status --alsologtostderr -v 5: exit status 7 (115.268326ms)

                                                
                                                
-- stdout --
	ha-028437
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-028437-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-028437-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1013 21:28:12.108017   59669 out.go:360] Setting OutFile to fd 1 ...
	I1013 21:28:12.108133   59669 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:28:12.108145   59669 out.go:374] Setting ErrFile to fd 2...
	I1013 21:28:12.108149   59669 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:28:12.108503   59669 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-2495/.minikube/bin
	I1013 21:28:12.108698   59669 out.go:368] Setting JSON to false
	I1013 21:28:12.108741   59669 mustload.go:65] Loading cluster: ha-028437
	I1013 21:28:12.108816   59669 notify.go:220] Checking for updates...
	I1013 21:28:12.109723   59669 config.go:182] Loaded profile config "ha-028437": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:28:12.109747   59669 status.go:174] checking status of ha-028437 ...
	I1013 21:28:12.110299   59669 cli_runner.go:164] Run: docker container inspect ha-028437 --format={{.State.Status}}
	I1013 21:28:12.128911   59669 status.go:371] ha-028437 host status = "Stopped" (err=<nil>)
	I1013 21:28:12.128932   59669 status.go:384] host is not running, skipping remaining checks
	I1013 21:28:12.128939   59669 status.go:176] ha-028437 status: &{Name:ha-028437 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1013 21:28:12.128967   59669 status.go:174] checking status of ha-028437-m02 ...
	I1013 21:28:12.129287   59669 cli_runner.go:164] Run: docker container inspect ha-028437-m02 --format={{.State.Status}}
	I1013 21:28:12.157256   59669 status.go:371] ha-028437-m02 host status = "Stopped" (err=<nil>)
	I1013 21:28:12.157274   59669 status.go:384] host is not running, skipping remaining checks
	I1013 21:28:12.157281   59669 status.go:176] ha-028437-m02 status: &{Name:ha-028437-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1013 21:28:12.157299   59669 status.go:174] checking status of ha-028437-m04 ...
	I1013 21:28:12.157595   59669 cli_runner.go:164] Run: docker container inspect ha-028437-m04 --format={{.State.Status}}
	I1013 21:28:12.174524   59669 status.go:371] ha-028437-m04 host status = "Stopped" (err=<nil>)
	I1013 21:28:12.174548   59669 status.go:384] host is not running, skipping remaining checks
	I1013 21:28:12.174554   59669 status.go:176] ha-028437-m04 status: &{Name:ha-028437-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.81s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (75.15s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1013 21:28:51.248917    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/functional-192425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:29:18.951451    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/functional-192425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-028437 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m14.079898515s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (75.15s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.79s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.79s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (84.14s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-028437 node add --control-plane --alsologtostderr -v 5: (1m23.116350788s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-028437 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-028437 status --alsologtostderr -v 5: (1.02193774s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (84.14s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.01s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.013755957s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.01s)

                                                
                                    
x
+
TestJSONOutput/start/Command (78.67s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-555478 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1013 21:31:50.819384    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-555478 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m18.663769771s)
--- PASS: TestJSONOutput/start/Command (78.67s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (5.69s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-555478 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-555478 --output=json --user=testUser: (5.694722128s)
--- PASS: TestJSONOutput/stop/Command (5.69s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.25s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-334460 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-334460 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (102.997779ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"ed049cb8-c3f6-4ccf-a92e-cbbab6925ed2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-334460] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"001fc2b6-8fe4-48b3-a724-28bd0f65538f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21724"}}
	{"specversion":"1.0","id":"ec63c28a-b699-4cc6-96b1-8508e74e464b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"80986836-52c2-4631-be1a-a2d6b8720735","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21724-2495/kubeconfig"}}
	{"specversion":"1.0","id":"124c88d1-fcbf-4890-8a9f-ad755023c492","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-2495/.minikube"}}
	{"specversion":"1.0","id":"78f59c6d-47c6-4a05-a193-55439822d3a8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"fcf6716e-73c1-4bbe-bcbb-c0029d8552c1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"589d849b-cc0c-4529-9971-0edc6e49d0f7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-334460" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-334460
--- PASS: TestErrorJSONOutput (0.25s)
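
Each line of the JSON output above is a CloudEvents envelope, so it can be post-processed with standard tools. As a minimal sketch (assuming jq is available on the host, which the test itself does not require), the error event produced by the unsupported driver can be pulled out of the stream like this:

	out/minikube-linux-arm64 start -p json-output-error-334460 --memory=3072 --output=json --wait=true --driver=fail \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'
	# prints: DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on linux/arm64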

                                                
                                    
x
+
TestKicCustomNetwork/create_custom_network (47.22s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-506774 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-506774 --network=: (45.185305189s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-506774" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-506774
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-506774: (2.014123059s)
--- PASS: TestKicCustomNetwork/create_custom_network (47.22s)

                                                
                                    
x
+
TestKicCustomNetwork/use_default_bridge_network (36.53s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-352465 --network=bridge
E1013 21:33:51.249457    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/functional-192425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-352465 --network=bridge: (34.498599639s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-352465" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-352465
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-352465: (2.007664521s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (36.53s)

                                                
                                    
x
+
TestKicExistingNetwork (38.22s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1013 21:33:58.994239    4299 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1013 21:33:59.012142    4299 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1013 21:33:59.012220    4299 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1013 21:33:59.012238    4299 cli_runner.go:164] Run: docker network inspect existing-network
W1013 21:33:59.029068    4299 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1013 21:33:59.029107    4299 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1013 21:33:59.029121    4299 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1013 21:33:59.029219    4299 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1013 21:33:59.045259    4299 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-95647f6063f5 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:2e:3d:b3:ce:26:60} reservation:<nil>}
I1013 21:33:59.045521    4299 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40002d37c0}
I1013 21:33:59.045540    4299 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1013 21:33:59.045585    4299 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1013 21:33:59.101805    4299 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-829105 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-829105 --network=existing-network: (36.086532475s)
helpers_test.go:175: Cleaning up "existing-network-829105" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-829105
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-829105: (1.989370709s)
I1013 21:34:37.201097    4299 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (38.22s)
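
The interesting part of this test is visible in the log: when a network with the requested name already exists, minikube attaches to it instead of allocating a new subnet. A rough by-hand equivalent, reusing the exact docker flags the test logged (the network name, subnet, and profile name are simply the values from this run):

	# pre-create a bridge network the same way minikube does
	docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 \
	  -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
	  --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
	# start a profile attached to the pre-existing network
	out/minikube-linux-arm64 start -p existing-network-829105 --network=existing-network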

                                                
                                    
x
+
TestKicCustomSubnet (35.49s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-542778 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-542778 --subnet=192.168.60.0/24: (33.342731296s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-542778 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-542778" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-542778
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-542778: (2.124870434s)
--- PASS: TestKicCustomSubnet (35.49s)

                                                
                                    
x
+
TestKicStaticIP (39.89s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-594748 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-594748 --static-ip=192.168.200.200: (37.604402613s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-594748 ip
helpers_test.go:175: Cleaning up "static-ip-594748" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-594748
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-594748: (2.090406913s)
--- PASS: TestKicStaticIP (39.89s)

                                                
                                    
x
+
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
x
+
TestMinikubeProfile (75.32s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-052668 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-052668 --driver=docker  --container-runtime=crio: (33.292120459s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-055491 --driver=docker  --container-runtime=crio
E1013 21:36:50.826506    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-055491 --driver=docker  --container-runtime=crio: (36.833015047s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-052668
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-055491
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-055491" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-055491
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-055491: (1.909572195s)
helpers_test.go:175: Cleaning up "first-052668" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-052668
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-052668: (1.934870924s)
--- PASS: TestMinikubeProfile (75.32s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (9.44s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-021352 --memory=3072 --mount-string /tmp/TestMountStartserial3476221644/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-021352 --memory=3072 --mount-string /tmp/TestMountStartserial3476221644/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.435406103s)
--- PASS: TestMountStart/serial/StartWithMountFirst (9.44s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-021352 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (9.45s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-023189 --memory=3072 --mount-string /tmp/TestMountStartserial3476221644/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-023189 --memory=3072 --mount-string /tmp/TestMountStartserial3476221644/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.451426248s)
--- PASS: TestMountStart/serial/StartWithMountSecond (9.45s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-023189 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (1.61s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-021352 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-021352 --alsologtostderr -v=5: (1.609497219s)
--- PASS: TestMountStart/serial/DeleteFirst (1.61s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-023189 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.21s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-023189
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-023189: (1.206024551s)
--- PASS: TestMountStart/serial/Stop (1.21s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (7.79s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-023189
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-023189: (6.790149446s)
--- PASS: TestMountStart/serial/RestartStopped (7.79s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-023189 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (139.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-604834 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1013 21:38:51.249122    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/functional-192425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 21:39:53.895095    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-604834 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (2m18.628264784s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-604834 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (139.13s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (5.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-604834 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-604834 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-604834 -- rollout status deployment/busybox: (3.347508657s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-604834 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-604834 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-604834 -- exec busybox-7b57f96db7-bnlg8 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-604834 -- exec busybox-7b57f96db7-f4fp8 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-604834 -- exec busybox-7b57f96db7-bnlg8 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-604834 -- exec busybox-7b57f96db7-f4fp8 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-604834 -- exec busybox-7b57f96db7-bnlg8 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-604834 -- exec busybox-7b57f96db7-f4fp8 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.26s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.91s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-604834 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-604834 -- exec busybox-7b57f96db7-bnlg8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-604834 -- exec busybox-7b57f96db7-bnlg8 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-604834 -- exec busybox-7b57f96db7-f4fp8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-604834 -- exec busybox-7b57f96db7-f4fp8 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.91s)

                                                
                                    
x
+
TestMultiNode/serial/AddNode (55.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-604834 -v=5 --alsologtostderr
E1013 21:40:14.313176    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/functional-192425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-604834 -v=5 --alsologtostderr: (54.322544616s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-604834 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (55.26s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-604834 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.7s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.70s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (10.19s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-604834 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-604834 cp testdata/cp-test.txt multinode-604834:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-604834 ssh -n multinode-604834 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-604834 cp multinode-604834:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile516259210/001/cp-test_multinode-604834.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-604834 ssh -n multinode-604834 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-604834 cp multinode-604834:/home/docker/cp-test.txt multinode-604834-m02:/home/docker/cp-test_multinode-604834_multinode-604834-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-604834 ssh -n multinode-604834 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-604834 ssh -n multinode-604834-m02 "sudo cat /home/docker/cp-test_multinode-604834_multinode-604834-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-604834 cp multinode-604834:/home/docker/cp-test.txt multinode-604834-m03:/home/docker/cp-test_multinode-604834_multinode-604834-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-604834 ssh -n multinode-604834 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-604834 ssh -n multinode-604834-m03 "sudo cat /home/docker/cp-test_multinode-604834_multinode-604834-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-604834 cp testdata/cp-test.txt multinode-604834-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-604834 ssh -n multinode-604834-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-604834 cp multinode-604834-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile516259210/001/cp-test_multinode-604834-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-604834 ssh -n multinode-604834-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-604834 cp multinode-604834-m02:/home/docker/cp-test.txt multinode-604834:/home/docker/cp-test_multinode-604834-m02_multinode-604834.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-604834 ssh -n multinode-604834-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-604834 ssh -n multinode-604834 "sudo cat /home/docker/cp-test_multinode-604834-m02_multinode-604834.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-604834 cp multinode-604834-m02:/home/docker/cp-test.txt multinode-604834-m03:/home/docker/cp-test_multinode-604834-m02_multinode-604834-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-604834 ssh -n multinode-604834-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-604834 ssh -n multinode-604834-m03 "sudo cat /home/docker/cp-test_multinode-604834-m02_multinode-604834-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-604834 cp testdata/cp-test.txt multinode-604834-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-604834 ssh -n multinode-604834-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-604834 cp multinode-604834-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile516259210/001/cp-test_multinode-604834-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-604834 ssh -n multinode-604834-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-604834 cp multinode-604834-m03:/home/docker/cp-test.txt multinode-604834:/home/docker/cp-test_multinode-604834-m03_multinode-604834.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-604834 ssh -n multinode-604834-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-604834 ssh -n multinode-604834 "sudo cat /home/docker/cp-test_multinode-604834-m03_multinode-604834.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-604834 cp multinode-604834-m03:/home/docker/cp-test.txt multinode-604834-m02:/home/docker/cp-test_multinode-604834-m03_multinode-604834-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-604834 ssh -n multinode-604834-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-604834 ssh -n multinode-604834-m02 "sudo cat /home/docker/cp-test_multinode-604834-m03_multinode-604834-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.19s)
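
The CopyFile run above repeats one pattern: minikube cp pushes a file onto a node, then minikube ssh -n <node> "sudo cat ..." reads it back to confirm the contents arrived. A minimal Go sketch of that pattern, assuming a minikube binary on PATH and reusing this run's profile/node names purely as placeholders:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// copyAndVerify pushes src into node via "minikube cp" and reads it back over "minikube ssh".
	func copyAndVerify(profile, node, src, dst string) error {
		if out, err := exec.Command("minikube", "-p", profile, "cp", src, node+":"+dst).CombinedOutput(); err != nil {
			return fmt.Errorf("cp failed: %v\n%s", err, out)
		}
		out, err := exec.Command("minikube", "-p", profile, "ssh", "-n", node, "sudo cat "+dst).CombinedOutput()
		if err != nil {
			return fmt.Errorf("ssh cat failed: %v\n%s", err, out)
		}
		fmt.Println("remote copy reads back as:", strings.TrimSpace(string(out)))
		return nil
	}

	func main() {
		err := copyAndVerify("multinode-604834", "multinode-604834-m02", "testdata/cp-test.txt", "/home/docker/cp-test.txt")
		if err != nil {
			fmt.Println(err)
		}
	}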

                                                
                                    
TestMultiNode/serial/StopNode (2.28s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-604834 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-604834 node stop m03: (1.218494999s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-604834 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-604834 status: exit status 7 (513.458443ms)

                                                
                                                
-- stdout --
	multinode-604834
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-604834-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-604834-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-604834 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-604834 status --alsologtostderr: exit status 7 (549.895216ms)

                                                
                                                
-- stdout --
	multinode-604834
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-604834-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-604834-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1013 21:41:13.614980  109960 out.go:360] Setting OutFile to fd 1 ...
	I1013 21:41:13.615089  109960 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:41:13.615100  109960 out.go:374] Setting ErrFile to fd 2...
	I1013 21:41:13.615105  109960 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:41:13.615364  109960 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-2495/.minikube/bin
	I1013 21:41:13.615550  109960 out.go:368] Setting JSON to false
	I1013 21:41:13.615593  109960 mustload.go:65] Loading cluster: multinode-604834
	I1013 21:41:13.615661  109960 notify.go:220] Checking for updates...
	I1013 21:41:13.616914  109960 config.go:182] Loaded profile config "multinode-604834": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:41:13.617057  109960 status.go:174] checking status of multinode-604834 ...
	I1013 21:41:13.617815  109960 cli_runner.go:164] Run: docker container inspect multinode-604834 --format={{.State.Status}}
	I1013 21:41:13.640920  109960 status.go:371] multinode-604834 host status = "Running" (err=<nil>)
	I1013 21:41:13.640944  109960 host.go:66] Checking if "multinode-604834" exists ...
	I1013 21:41:13.641227  109960 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-604834
	I1013 21:41:13.679335  109960 host.go:66] Checking if "multinode-604834" exists ...
	I1013 21:41:13.679651  109960 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 21:41:13.679699  109960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-604834
	I1013 21:41:13.701356  109960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32906 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/multinode-604834/id_rsa Username:docker}
	I1013 21:41:13.801297  109960 ssh_runner.go:195] Run: systemctl --version
	I1013 21:41:13.807588  109960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 21:41:13.820178  109960 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 21:41:13.884455  109960 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-13 21:41:13.875542477 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 21:41:13.885010  109960 kubeconfig.go:125] found "multinode-604834" server: "https://192.168.67.2:8443"
	I1013 21:41:13.885042  109960 api_server.go:166] Checking apiserver status ...
	I1013 21:41:13.885092  109960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1013 21:41:13.897614  109960 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1240/cgroup
	I1013 21:41:13.906264  109960 api_server.go:182] apiserver freezer: "2:freezer:/docker/cd298652927adfd5ac271cf24584240d51d4fb3d8aa4679716b52b1250638e86/crio/crio-1f738a51646fef47ad6c2be20a2c81b1b63d909d6178107632ad2861f246456e"
	I1013 21:41:13.906339  109960 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/cd298652927adfd5ac271cf24584240d51d4fb3d8aa4679716b52b1250638e86/crio/crio-1f738a51646fef47ad6c2be20a2c81b1b63d909d6178107632ad2861f246456e/freezer.state
	I1013 21:41:13.913469  109960 api_server.go:204] freezer state: "THAWED"
	I1013 21:41:13.913496  109960 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1013 21:41:13.921694  109960 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1013 21:41:13.921723  109960 status.go:463] multinode-604834 apiserver status = Running (err=<nil>)
	I1013 21:41:13.921734  109960 status.go:176] multinode-604834 status: &{Name:multinode-604834 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1013 21:41:13.921750  109960 status.go:174] checking status of multinode-604834-m02 ...
	I1013 21:41:13.922040  109960 cli_runner.go:164] Run: docker container inspect multinode-604834-m02 --format={{.State.Status}}
	I1013 21:41:13.938901  109960 status.go:371] multinode-604834-m02 host status = "Running" (err=<nil>)
	I1013 21:41:13.938926  109960 host.go:66] Checking if "multinode-604834-m02" exists ...
	I1013 21:41:13.939230  109960 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-604834-m02
	I1013 21:41:13.959899  109960 host.go:66] Checking if "multinode-604834-m02" exists ...
	I1013 21:41:13.960198  109960 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1013 21:41:13.960235  109960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-604834-m02
	I1013 21:41:13.981723  109960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32911 SSHKeyPath:/home/jenkins/minikube-integration/21724-2495/.minikube/machines/multinode-604834-m02/id_rsa Username:docker}
	I1013 21:41:14.080948  109960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1013 21:41:14.093592  109960 status.go:176] multinode-604834-m02 status: &{Name:multinode-604834-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1013 21:41:14.093695  109960 status.go:174] checking status of multinode-604834-m03 ...
	I1013 21:41:14.094012  109960 cli_runner.go:164] Run: docker container inspect multinode-604834-m03 --format={{.State.Status}}
	I1013 21:41:14.111225  109960 status.go:371] multinode-604834-m03 host status = "Stopped" (err=<nil>)
	I1013 21:41:14.111251  109960 status.go:384] host is not running, skipping remaining checks
	I1013 21:41:14.111258  109960 status.go:176] multinode-604834-m03 status: &{Name:multinode-604834-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.28s)
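
In the StopNode run, "minikube status" exits non-zero once any node is down; this run produced exit status 7 with m03 stopped. A hedged Go sketch for surfacing that exit code (minikube assumed on PATH; treating a non-zero code as "something is stopped" is an inference from this log, not documented semantics):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("minikube", "-p", "multinode-604834", "status").CombinedOutput()
		fmt.Print(string(out))
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// This run returned 7 while one worker host was stopped.
			fmt.Println("status exit code:", exitErr.ExitCode())
		} else if err != nil {
			fmt.Println("could not run minikube status:", err)
		}
	}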

                                                
                                    
TestMultiNode/serial/StartAfterStop (8.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-604834 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-604834 node start m03 -v=5 --alsologtostderr: (7.272319052s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-604834 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.09s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (78.33s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-604834
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-604834
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-604834: (24.710751992s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-604834 --wait=true -v=5 --alsologtostderr
E1013 21:41:50.818899    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-604834 --wait=true -v=5 --alsologtostderr: (53.48809301s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-604834
--- PASS: TestMultiNode/serial/RestartKeepsNodes (78.33s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.63s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-604834 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-604834 node delete m03: (4.941524027s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-604834 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.63s)
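
The last check in DeleteNode renders each node's Ready condition through a kubectl go-template, one status per line. A small sketch, assuming the same template and kubectl on PATH, that flags any node whose Ready condition is not True (an illustration of the check, not the test's own code):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	const readyTmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

	func main() {
		out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+readyTmpl).CombinedOutput()
		if err != nil {
			fmt.Printf("kubectl failed: %v\n%s", err, out)
			return
		}
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			line = strings.TrimSpace(line)
			if line == "" {
				continue
			}
			if line != "True" {
				fmt.Println("node not Ready:", line)
			}
		}
	}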

                                                
                                    
TestMultiNode/serial/StopMultiNode (23.72s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-604834 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-604834 stop: (23.527136076s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-604834 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-604834 status: exit status 7 (88.469119ms)

                                                
                                                
-- stdout --
	multinode-604834
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-604834-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-604834 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-604834 status --alsologtostderr: exit status 7 (100.554399ms)

                                                
                                                
-- stdout --
	multinode-604834
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-604834-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1013 21:43:09.826951  117767 out.go:360] Setting OutFile to fd 1 ...
	I1013 21:43:09.827058  117767 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:43:09.827069  117767 out.go:374] Setting ErrFile to fd 2...
	I1013 21:43:09.827075  117767 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:43:09.827353  117767 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-2495/.minikube/bin
	I1013 21:43:09.827541  117767 out.go:368] Setting JSON to false
	I1013 21:43:09.827588  117767 mustload.go:65] Loading cluster: multinode-604834
	I1013 21:43:09.827658  117767 notify.go:220] Checking for updates...
	I1013 21:43:09.828908  117767 config.go:182] Loaded profile config "multinode-604834": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:43:09.829076  117767 status.go:174] checking status of multinode-604834 ...
	I1013 21:43:09.830302  117767 cli_runner.go:164] Run: docker container inspect multinode-604834 --format={{.State.Status}}
	I1013 21:43:09.847726  117767 status.go:371] multinode-604834 host status = "Stopped" (err=<nil>)
	I1013 21:43:09.847747  117767 status.go:384] host is not running, skipping remaining checks
	I1013 21:43:09.847753  117767 status.go:176] multinode-604834 status: &{Name:multinode-604834 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1013 21:43:09.847806  117767 status.go:174] checking status of multinode-604834-m02 ...
	I1013 21:43:09.848107  117767 cli_runner.go:164] Run: docker container inspect multinode-604834-m02 --format={{.State.Status}}
	I1013 21:43:09.878282  117767 status.go:371] multinode-604834-m02 host status = "Stopped" (err=<nil>)
	I1013 21:43:09.878303  117767 status.go:384] host is not running, skipping remaining checks
	I1013 21:43:09.878310  117767 status.go:176] multinode-604834-m02 status: &{Name:multinode-604834-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.72s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (47.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-604834 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1013 21:43:51.249453    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/functional-192425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-604834 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (46.382280195s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-604834 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (47.09s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (37.05s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-604834
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-604834-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-604834-m02 --driver=docker  --container-runtime=crio: exit status 14 (88.338085ms)

                                                
                                                
-- stdout --
	* [multinode-604834-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21724
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21724-2495/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-2495/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-604834-m02' is duplicated with machine name 'multinode-604834-m02' in profile 'multinode-604834'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-604834-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-604834-m03 --driver=docker  --container-runtime=crio: (34.264309505s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-604834
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-604834: exit status 80 (717.314823ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-604834 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-604834-m03 already exists in multinode-604834-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-604834-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-604834-m03: (1.926670595s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (37.05s)
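
ValidateNameConflict exercises two guards: a new profile may not reuse a machine name that already belongs to another profile (exit status 14, MK_USAGE), and "node add" refuses when the next node name already exists as a separate profile (exit status 80, GUEST_NODE_ADD). A toy sketch of the first guard, assuming a flat list of existing machine names; this is a stand-in for illustration, not minikube's implementation:

	package main

	import "fmt"

	// validateProfileName rejects a profile name that collides with an existing machine name.
	func validateProfileName(name string, existingMachines []string) error {
		for _, m := range existingMachines {
			if m == name {
				return fmt.Errorf("profile name %q is duplicated with machine name %q", name, m)
			}
		}
		return nil
	}

	func main() {
		existing := []string{"multinode-604834", "multinode-604834-m02"}
		fmt.Println(validateProfileName("multinode-604834-m02", existing)) // rejected, as in the log
		fmt.Println(validateProfileName("multinode-604834-m04", existing)) // accepted
	}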

                                                
                                    
TestPreload (132.43s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-942796 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-942796 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (1m2.57036927s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-942796 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-arm64 -p test-preload-942796 image pull gcr.io/k8s-minikube/busybox: (2.017326458s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-942796
preload_test.go:57: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-942796: (5.804511736s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-942796 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:65: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-942796 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (59.546110082s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-942796 image list
helpers_test.go:175: Cleaning up "test-preload-942796" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-942796
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-942796: (2.259341489s)
--- PASS: TestPreload (132.43s)
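
TestPreload starts a cluster with --preload=false, pulls gcr.io/k8s-minikube/busybox, stops and restarts the cluster, then lists images to confirm the pulled image survived. A hedged sketch of that final verification step (minikube assumed on PATH; the profile name from this run is used only as a placeholder):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		profile := "test-preload-942796"
		out, err := exec.Command("minikube", "-p", profile, "image", "list").CombinedOutput()
		if err != nil {
			fmt.Printf("image list failed: %v\n%s", err, out)
			return
		}
		if strings.Contains(string(out), "gcr.io/k8s-minikube/busybox") {
			fmt.Println("busybox image survived the stop/start cycle")
		} else {
			fmt.Println("busybox image missing after restart")
		}
	}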

                                                
                                    
TestScheduledStopUnix (110.38s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-735181 --memory=3072 --driver=docker  --container-runtime=crio
E1013 21:46:50.818676    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-735181 --memory=3072 --driver=docker  --container-runtime=crio: (33.711484142s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-735181 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-735181 -n scheduled-stop-735181
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-735181 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1013 21:47:24.573253    4299 retry.go:31] will retry after 124.609µs: open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/scheduled-stop-735181/pid: no such file or directory
I1013 21:47:24.574465    4299 retry.go:31] will retry after 174.578µs: open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/scheduled-stop-735181/pid: no such file or directory
I1013 21:47:24.575625    4299 retry.go:31] will retry after 239.57µs: open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/scheduled-stop-735181/pid: no such file or directory
I1013 21:47:24.576743    4299 retry.go:31] will retry after 410.306µs: open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/scheduled-stop-735181/pid: no such file or directory
I1013 21:47:24.577865    4299 retry.go:31] will retry after 457.263µs: open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/scheduled-stop-735181/pid: no such file or directory
I1013 21:47:24.578982    4299 retry.go:31] will retry after 1.000363ms: open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/scheduled-stop-735181/pid: no such file or directory
I1013 21:47:24.580104    4299 retry.go:31] will retry after 1.521239ms: open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/scheduled-stop-735181/pid: no such file or directory
I1013 21:47:24.582286    4299 retry.go:31] will retry after 2.183092ms: open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/scheduled-stop-735181/pid: no such file or directory
I1013 21:47:24.585474    4299 retry.go:31] will retry after 1.434022ms: open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/scheduled-stop-735181/pid: no such file or directory
I1013 21:47:24.587671    4299 retry.go:31] will retry after 5.753655ms: open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/scheduled-stop-735181/pid: no such file or directory
I1013 21:47:24.593889    4299 retry.go:31] will retry after 6.231555ms: open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/scheduled-stop-735181/pid: no such file or directory
I1013 21:47:24.601124    4299 retry.go:31] will retry after 5.684329ms: open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/scheduled-stop-735181/pid: no such file or directory
I1013 21:47:24.607370    4299 retry.go:31] will retry after 7.252508ms: open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/scheduled-stop-735181/pid: no such file or directory
I1013 21:47:24.615599    4299 retry.go:31] will retry after 14.370789ms: open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/scheduled-stop-735181/pid: no such file or directory
I1013 21:47:24.630837    4299 retry.go:31] will retry after 25.381126ms: open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/scheduled-stop-735181/pid: no such file or directory
I1013 21:47:24.657059    4299 retry.go:31] will retry after 22.738302ms: open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/scheduled-stop-735181/pid: no such file or directory
I1013 21:47:24.681358    4299 retry.go:31] will retry after 81.692918ms: open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/scheduled-stop-735181/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-735181 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-735181 -n scheduled-stop-735181
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-735181
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-735181 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-735181
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-735181: exit status 7 (64.645176ms)

                                                
                                                
-- stdout --
	scheduled-stop-735181
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-735181 -n scheduled-stop-735181
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-735181 -n scheduled-stop-735181: exit status 7 (69.736483ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-735181" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-735181
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-735181: (5.032063322s)
--- PASS: TestScheduledStopUnix (110.38s)
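
TestScheduledStopUnix drives the flow: minikube stop --schedule <duration>, optionally --cancel-scheduled, then polls status until the host reports Stopped (surfaced here as exit status 7, which the test notes "may be ok"). A minimal driver sketch under those assumptions, with the profile name as a placeholder and minikube assumed on PATH:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func minikube(args ...string) ([]byte, error) {
		return exec.Command("minikube", args...).CombinedOutput()
	}

	func main() {
		profile := "scheduled-stop-735181"
		if out, err := minikube("stop", "-p", profile, "--schedule", "15s"); err != nil {
			fmt.Printf("scheduling the stop failed: %v\n%s", err, out)
			return
		}
		// Poll the host state until the scheduled stop takes effect.
		for i := 0; i < 12; i++ {
			out, _ := minikube("status", "--format={{.Host}}", "-p", profile)
			if strings.TrimSpace(string(out)) == "Stopped" {
				fmt.Println("host stopped as scheduled")
				return
			}
			time.Sleep(5 * time.Second)
		}
		fmt.Println("host did not stop within the polling window")
	}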

                                                
                                    
TestInsufficientStorage (11.65s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-779941 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-779941 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (9.178028313s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"73c50733-ba2d-41b9-9225-f8d4ea3ed032","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-779941] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"6dc71c47-e3e4-4d6f-9afc-0b9627ae0d87","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21724"}}
	{"specversion":"1.0","id":"35ec7261-6a1c-4cb7-800e-b121b2bb16b9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"4f5aadc3-ba49-4aea-9374-489303c47dc6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21724-2495/kubeconfig"}}
	{"specversion":"1.0","id":"669a9280-8852-4315-8616-787e5777c5be","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-2495/.minikube"}}
	{"specversion":"1.0","id":"91d1cce5-ca04-426e-97cd-ce59aa43371b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"5f678304-73df-40ae-8795-0bee65a04afb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a92c2e8b-6d21-415f-b564-d3db38a2a5e0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"34b61cf6-fc5c-4b1e-9bc7-2968c710ac05","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"d8617c71-494b-4365-9a69-b4f0be3b1f7c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"5eb1d971-7149-4156-b689-368cf74cba6f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"c05db01b-ebb3-4b57-8529-ac66d0ba17e7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-779941\" primary control-plane node in \"insufficient-storage-779941\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"1def4d06-3f99-4ded-b356-90ceb8e48c68","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1759745255-21703 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"6f6cb6a5-c768-41c0-a776-50f83d8f17e6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"39e2b88d-ac2c-46a7-8ced-8c4461fba8ca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-779941 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-779941 --output=json --layout=cluster: exit status 7 (292.448794ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-779941","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-779941","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1013 21:48:50.211469  133898 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-779941" does not appear in /home/jenkins/minikube-integration/21724-2495/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-779941 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-779941 --output=json --layout=cluster: exit status 7 (290.576828ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-779941","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-779941","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1013 21:48:50.500304  133964 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-779941" does not appear in /home/jenkins/minikube-integration/21724-2495/kubeconfig
	E1013 21:48:50.510433  133964 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/insufficient-storage-779941/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-779941" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-779941
E1013 21:48:51.249895    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/functional-192425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-779941: (1.892125769s)
--- PASS: TestInsufficientStorage (11.65s)
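
With --output=json, minikube start emits one CloudEvents-style JSON object per line, and the storage failure above arrives as an io.k8s.sigs.minikube.error event carrying exitcode 26 and name RSRC_DOCKER_STORAGE. A hedged sketch that scans such a stream for error events (field names copied from the output above; anything beyond them is an assumption):

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// event captures only the fields visible in the JSON lines above.
	type event struct {
		Type string            `json:"type"`
		Data map[string]string `json:"data"`
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin) // pipe the --output=json start log into stdin
		sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
		for sc.Scan() {
			var ev event
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // skip anything that is not a JSON event line
			}
			if ev.Type == "io.k8s.sigs.minikube.error" {
				fmt.Printf("error %s (exitcode %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
			}
		}
	}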

                                                
                                    
TestRunningBinaryUpgrade (60.14s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.3869163086 start -p running-upgrade-601721 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.3869163086 start -p running-upgrade-601721 --memory=3072 --vm-driver=docker  --container-runtime=crio: (35.127691086s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-601721 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-601721 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (19.653322067s)
helpers_test.go:175: Cleaning up "running-upgrade-601721" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-601721
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-601721: (2.394565465s)
--- PASS: TestRunningBinaryUpgrade (60.14s)

                                                
                                    
TestKubernetesUpgrade (211.33s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-304765 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-304765 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (42.255440946s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-304765
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-304765: (1.341197876s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-304765 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-304765 status --format={{.Host}}: exit status 7 (94.127782ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-304765 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-304765 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (2m13.11662982s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-304765 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-304765 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-304765 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (108.013866ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-304765] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21724
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21724-2495/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-2495/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-304765
	    minikube start -p kubernetes-upgrade-304765 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3047652 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-304765 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-304765 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-304765 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (32.224878005s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-304765" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-304765
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-304765: (2.059348427s)
--- PASS: TestKubernetesUpgrade (211.33s)
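
The downgrade attempt above is refused with exit status 106 (K8S_DOWNGRADE_UNSUPPORTED) because the requested version is older than the cluster's current one. A hedged illustration of that guard using semantic-version comparison via golang.org/x/mod/semver; this mirrors the behaviour seen in the log, not minikube's actual code:

	package main

	import (
		"fmt"

		"golang.org/x/mod/semver"
	)

	// allowVersionChange permits same-version restarts and upgrades, refuses downgrades.
	func allowVersionChange(current, requested string) error {
		if semver.Compare(requested, current) < 0 {
			return fmt.Errorf("unable to safely downgrade existing Kubernetes %s cluster to %s", current, requested)
		}
		return nil
	}

	func main() {
		fmt.Println(allowVersionChange("v1.34.1", "v1.28.0")) // refused, as in the log
		fmt.Println(allowVersionChange("v1.28.0", "v1.34.1")) // allowed
	}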

                                                
                                    
TestMissingContainerUpgrade (122.56s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.698426619 start -p missing-upgrade-403510 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.698426619 start -p missing-upgrade-403510 --memory=3072 --driver=docker  --container-runtime=crio: (1m3.709480323s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-403510
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-403510
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-403510 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-403510 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (52.737521268s)
helpers_test.go:175: Cleaning up "missing-upgrade-403510" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-403510
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-403510: (2.226939498s)
--- PASS: TestMissingContainerUpgrade (122.56s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-585265 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-585265 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (85.348007ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-585265] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21724
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21724-2495/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-2495/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (43.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-585265 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-585265 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (42.617359313s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-585265 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (43.08s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (8.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-585265 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-585265 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (5.839886454s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-585265 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-585265 status -o json: exit status 2 (382.545301ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-585265","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-585265
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-585265: (2.097733705s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (8.32s)

                                                
                                    
TestNoKubernetes/serial/Start (9.54s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-585265 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-585265 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (9.541749264s)
--- PASS: TestNoKubernetes/serial/Start (9.54s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.37s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-585265 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-585265 "sudo systemctl is-active --quiet service kubelet": exit status 1 (374.467932ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.37s)
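
VerifyK8sNotRunning asks systemd on the node whether kubelet is active; in --no-kubernetes mode the expected outcome is a non-zero exit (systemctl reports status 3 over ssh, surfaced by the test as exit status 1). A sketch of the same probe from Go, assuming minikube on PATH and using this run's profile name as a placeholder:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		profile := "NoKubernetes-585265"
		cmd := exec.Command("minikube", "ssh", "-p", profile,
			"sudo systemctl is-active --quiet service kubelet")
		if err := cmd.Run(); err != nil {
			fmt.Println("kubelet is not active (expected when Kubernetes is disabled):", err)
			return
		}
		fmt.Println("kubelet is active")
	}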

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.18s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.18s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-585265
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-585265: (1.239317567s)
--- PASS: TestNoKubernetes/serial/Stop (1.24s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.59s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-585265 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-585265 --driver=docker  --container-runtime=crio: (7.586635249s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.59s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-585265 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-585265 "sudo systemctl is-active --quiet service kubelet": exit status 1 (285.523744ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

TestStoppedBinaryUpgrade/Setup (1.75s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.75s)

TestStoppedBinaryUpgrade/Upgrade (60.14s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.3996083030 start -p stopped-upgrade-014468 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.3996083030 start -p stopped-upgrade-014468 --memory=3072 --vm-driver=docker  --container-runtime=crio: (39.5391795s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.3996083030 -p stopped-upgrade-014468 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.3996083030 -p stopped-upgrade-014468 stop: (1.236176724s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-014468 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1013 21:51:50.818416    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-014468 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (19.3643047s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (60.14s)
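
The sequence above is the stopped-binary upgrade path: provision a cluster with an older minikube release, stop it, then start the same profile with the binary under test, which must adopt the existing cluster instead of recreating it. A rough equivalent by hand, assuming an old release binary has already been downloaded to ./minikube-old (the filename is illustrative):

    ./minikube-old start -p stopped-upgrade-014468 --memory=3072 --vm-driver=docker --container-runtime=crio
    ./minikube-old stop -p stopped-upgrade-014468
    # restart the stopped profile with the new binary; configuration should carry over
    out/minikube-linux-arm64 start -p stopped-upgrade-014468 --memory=3072 --driver=docker --container-runtime=crio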

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.18s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-014468
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-014468: (1.174958217s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.18s)
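
`minikube logs` collects the node, container-runtime and component logs for a profile; when triaging an upgrade it is often handier to write them to a file. A minimal sketch (the filename is arbitrary):

    minikube logs -p stopped-upgrade-014468                     # print to stdout
    minikube logs -p stopped-upgrade-014468 --file=upgrade.log  # save for later inspection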

                                                
                                    
TestPause/serial/Start (83.21s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-609677 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-609677 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m23.205598758s)
--- PASS: TestPause/serial/Start (83.21s)

TestPause/serial/SecondStartNoReconfiguration (26.92s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-609677 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-609677 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (26.900724901s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (26.92s)
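
The second start above targets an already-running profile: minikube should detect the existing node and reuse it rather than reprovision, which is why the run completes in well under the original start time. Sketched by hand (flags mirror the test; timing will vary):

    minikube start -p pause-609677 --memory=3072 --install-addons=false --wait=all --driver=docker --container-runtime=crio
    # issuing start again against the same profile should be a fast, non-destructive reconfiguration check
    minikube start -p pause-609677 --driver=docker --container-runtime=crio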

                                                
                                    
TestNetworkPlugins/group/false (3.57s)
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-122822 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-122822 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (183.45881ms)

-- stdout --
	* [false-122822] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21724
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21724-2495/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-2495/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration

-- /stdout --
** stderr ** 
	I1013 21:55:02.891274  167801 out.go:360] Setting OutFile to fd 1 ...
	I1013 21:55:02.891517  167801 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:55:02.891550  167801 out.go:374] Setting ErrFile to fd 2...
	I1013 21:55:02.891573  167801 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1013 21:55:02.891944  167801 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-2495/.minikube/bin
	I1013 21:55:02.892422  167801 out.go:368] Setting JSON to false
	I1013 21:55:02.893325  167801 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":5837,"bootTime":1760386666,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1013 21:55:02.893422  167801 start.go:141] virtualization:  
	I1013 21:55:02.896898  167801 out.go:179] * [false-122822] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1013 21:55:02.900813  167801 out.go:179]   - MINIKUBE_LOCATION=21724
	I1013 21:55:02.900868  167801 notify.go:220] Checking for updates...
	I1013 21:55:02.904082  167801 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1013 21:55:02.907121  167801 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-2495/kubeconfig
	I1013 21:55:02.910063  167801 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-2495/.minikube
	I1013 21:55:02.913108  167801 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1013 21:55:02.916131  167801 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1013 21:55:02.919730  167801 config.go:182] Loaded profile config "force-systemd-flag-257205": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1013 21:55:02.919899  167801 driver.go:421] Setting default libvirt URI to qemu:///system
	I1013 21:55:02.945853  167801 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1013 21:55:02.945983  167801 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1013 21:55:03.006282  167801 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-13 21:55:02.995467227 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1013 21:55:03.006411  167801 docker.go:318] overlay module found
	I1013 21:55:03.009927  167801 out.go:179] * Using the docker driver based on user configuration
	I1013 21:55:03.012910  167801 start.go:305] selected driver: docker
	I1013 21:55:03.012939  167801 start.go:925] validating driver "docker" against <nil>
	I1013 21:55:03.012953  167801 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1013 21:55:03.016819  167801 out.go:203] 
	W1013 21:55:03.019843  167801 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1013 21:55:03.022852  167801 out.go:203] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-122822 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-122822

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-122822

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-122822

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-122822

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-122822

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-122822

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-122822

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-122822

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-122822

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-122822

>>> host: /etc/nsswitch.conf:
* Profile "false-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-122822"

>>> host: /etc/hosts:
* Profile "false-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-122822"

>>> host: /etc/resolv.conf:
* Profile "false-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-122822"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-122822

>>> host: crictl pods:
* Profile "false-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-122822"

>>> host: crictl containers:
* Profile "false-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-122822"

>>> k8s: describe netcat deployment:
error: context "false-122822" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-122822" does not exist

>>> k8s: netcat logs:
error: context "false-122822" does not exist

>>> k8s: describe coredns deployment:
error: context "false-122822" does not exist

>>> k8s: describe coredns pods:
error: context "false-122822" does not exist

>>> k8s: coredns logs:
error: context "false-122822" does not exist

>>> k8s: describe api server pod(s):
error: context "false-122822" does not exist

>>> k8s: api server logs:
error: context "false-122822" does not exist

>>> host: /etc/cni:
* Profile "false-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-122822"

>>> host: ip a s:
* Profile "false-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-122822"

>>> host: ip r s:
* Profile "false-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-122822"

>>> host: iptables-save:
* Profile "false-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-122822"

>>> host: iptables table nat:
* Profile "false-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-122822"

>>> k8s: describe kube-proxy daemon set:
error: context "false-122822" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-122822" does not exist

>>> k8s: kube-proxy logs:
error: context "false-122822" does not exist

>>> host: kubelet daemon status:
* Profile "false-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-122822"

>>> host: kubelet daemon config:
* Profile "false-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-122822"

>>> k8s: kubelet logs:
* Profile "false-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-122822"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-122822"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-122822"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-122822

>>> host: docker daemon status:
* Profile "false-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-122822"

>>> host: docker daemon config:
* Profile "false-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-122822"

>>> host: /etc/docker/daemon.json:
* Profile "false-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-122822"

>>> host: docker system info:
* Profile "false-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-122822"

>>> host: cri-docker daemon status:
* Profile "false-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-122822"

>>> host: cri-docker daemon config:
* Profile "false-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-122822"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-122822"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-122822"

>>> host: cri-dockerd version:
* Profile "false-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-122822"

>>> host: containerd daemon status:
* Profile "false-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-122822"

>>> host: containerd daemon config:
* Profile "false-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-122822"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-122822"

>>> host: /etc/containerd/config.toml:
* Profile "false-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-122822"

>>> host: containerd config dump:
* Profile "false-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-122822"

>>> host: crio daemon status:
* Profile "false-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-122822"

>>> host: crio daemon config:
* Profile "false-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-122822"

>>> host: /etc/crio:
* Profile "false-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-122822"

>>> host: crio config:
* Profile "false-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-122822"

----------------------- debugLogs end: false-122822 [took: 3.232696579s] --------------------------------
helpers_test.go:175: Cleaning up "false-122822" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-122822
--- PASS: TestNetworkPlugins/group/false (3.57s)
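
The exit status 14 (MK_USAGE) above is the expected outcome: the crio container runtime requires a CNI, so --cni=false is rejected during validation and no cluster is ever created. A hedged sketch of the distinction (the "cni-demo" profile name and the bridge choice are illustrative, not part of the test):

    # rejected up front: crio has no built-in pod networking
    minikube start -p false-122822 --cni=false --driver=docker --container-runtime=crio
    # accepted: let minikube auto-select a CNI, or name one explicitly
    minikube start -p cni-demo --driver=docker --container-runtime=crio
    minikube start -p cni-demo --cni=bridge --driver=docker --container-runtime=crio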

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (62.58s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-061725 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-061725 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (1m2.574715788s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (62.58s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.41s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-061725 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [768e0ffa-efa2-4156-98c7-722ab5e3d117] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [768e0ffa-efa2-4156-98c7-722ab5e3d117] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003604954s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-061725 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.41s)
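
The deploy step is plain kubectl against the profile's context: create the busybox pod from testdata, wait for it to become Ready, then exec a trivial command to prove the container is usable. The same flow with `kubectl wait` standing in for the suite's polling helper:

    kubectl --context old-k8s-version-061725 create -f testdata/busybox.yaml
    kubectl --context old-k8s-version-061725 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
    kubectl --context old-k8s-version-061725 exec busybox -- /bin/sh -c "ulimit -n"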

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (11.84s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-061725 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-061725 --alsologtostderr -v=3: (11.838420503s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.84s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-061725 -n old-k8s-version-061725
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-061725 -n old-k8s-version-061725: exit status 7 (81.156953ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-061725 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)
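
`minikube status` exits non-zero (7 here) because the host is stopped, and the test explicitly tolerates that; the point is that addons can still be enabled against a stopped profile so they are applied on the next start. Sketch of the same two steps:

    minikube status --format={{.Host}} -p old-k8s-version-061725    # prints "Stopped"; non-zero exit is expected
    minikube addons enable dashboard -p old-k8s-version-061725 --images=MetricsScraper=registry.k8s.io/echoserver:1.4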

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (52.94s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-061725 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-061725 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (52.221932491s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-061725 -n old-k8s-version-061725
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (52.94s)

TestStartStop/group/no-preload/serial/FirstStart (75.26s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-998398 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-998398 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m15.258302504s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (75.26s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-6zgml" [b62057b0-535c-46d1-87a0-f7e573c4b455] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004218251s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)
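
The readiness check above is a label-selector wait in the kubernetes-dashboard namespace; expressed directly with kubectl it would look roughly like this (an equivalent sketch, not the helper the suite actually uses):

    kubectl --context old-k8s-version-061725 -n kubernetes-dashboard \
      wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m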

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.16s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-6zgml" [b62057b0-535c-46d1-87a0-f7e573c4b455] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004452352s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-061725 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.16s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.3s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-061725 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.30s)
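
The image audit lists every image cached in the profile and flags anything outside the expected Kubernetes set (here kindnetd and the busybox test image). The JSON output is the easiest form to post-process; the jq filter below assumes an array-of-objects shape with a repoTags field, which may change between releases:

    minikube -p old-k8s-version-061725 image list --format=json
    minikube -p old-k8s-version-061725 image list --format=json | jq -r '.[].repoTags[]'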

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (83.16s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-251758 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-251758 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m23.159380106s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (83.16s)

TestStartStop/group/no-preload/serial/DeployApp (9.4s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-998398 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [2606a914-28cd-4c36-8cc8-6609e307bd62] Pending
helpers_test.go:352: "busybox" [2606a914-28cd-4c36-8cc8-6609e307bd62] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [2606a914-28cd-4c36-8cc8-6609e307bd62] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003551321s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-998398 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.40s)

TestStartStop/group/no-preload/serial/Stop (11.94s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-998398 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-998398 --alsologtostderr -v=3: (11.936617412s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.94s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-998398 -n no-preload-998398
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-998398 -n no-preload-998398: exit status 7 (76.392573ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-998398 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/no-preload/serial/SecondStart (50.24s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-998398 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-998398 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (49.891666636s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-998398 -n no-preload-998398
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (50.24s)

TestStartStop/group/embed-certs/serial/DeployApp (9.34s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-251758 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [e59e21ac-ac32-43ef-aebf-149407845f99] Pending
helpers_test.go:352: "busybox" [e59e21ac-ac32-43ef-aebf-149407845f99] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [e59e21ac-ac32-43ef-aebf-149407845f99] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.00365825s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-251758 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.34s)

TestStartStop/group/embed-certs/serial/Stop (11.93s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-251758 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-251758 --alsologtostderr -v=3: (11.932721949s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.93s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-jplsp" [4814b560-6eca-4988-86bf-4b885ba6f1f9] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003326748s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-jplsp" [4814b560-6eca-4988-86bf-4b885ba6f1f9] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003589138s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-998398 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-251758 -n embed-certs-251758
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-251758 -n embed-certs-251758: exit status 7 (71.466249ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-251758 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/embed-certs/serial/SecondStart (51.98s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-251758 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-251758 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (51.583126168s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-251758 -n embed-certs-251758
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (51.98s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-998398 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (82.12s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-007533 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-007533 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m22.122932551s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (82.12s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-txgzm" [d92af1b4-675d-48d6-b1e5-f1e88ecad032] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003317089s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-txgzm" [d92af1b4-675d-48d6-b1e5-f1e88ecad032] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003109611s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-251758 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-251758 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/newest-cni/serial/FirstStart (42.03s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-400889 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1013 22:10:20.577018    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/old-k8s-version-061725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:10:20.583351    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/old-k8s-version-061725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:10:20.594791    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/old-k8s-version-061725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:10:20.616144    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/old-k8s-version-061725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:10:20.657506    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/old-k8s-version-061725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:10:20.738900    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/old-k8s-version-061725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:10:20.900508    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/old-k8s-version-061725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:10:21.222473    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/old-k8s-version-061725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:10:21.863963    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/old-k8s-version-061725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:10:23.145548    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/old-k8s-version-061725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-400889 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (42.03426313s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (42.03s)
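
The start command above passes --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 through to kubeadm, so the cluster is bootstrapped with that specific pod CIDR. As a rough hand-run check (not part of the recorded test; the kubeadm-config ConfigMap name and the assumption that minikube points the kubectl context at the profile are standard kubeadm/minikube behaviour rather than anything taken from this log), one could confirm the CIDR landed with:

	# ClusterConfiguration stored by kubeadm should carry the requested podSubnet
	kubectl --context newest-cni-400889 -n kube-system get configmap kubeadm-config -o yaml | grep podSubnet
	# node spec.podCIDR is only populated if the controller-manager allocates node CIDRs
	kubectl --context newest-cni-400889 get nodes -o jsonpath='{.items[*].spec.podCIDR}'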

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.43s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-007533 create -f testdata/busybox.yaml
E1013 22:10:25.707688    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/old-k8s-version-061725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [021a5a33-018f-4fda-8fd6-c390d49a3993] Pending
helpers_test.go:352: "busybox" [021a5a33-018f-4fda-8fd6-c390d49a3993] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [021a5a33-018f-4fda-8fd6-c390d49a3993] Running
E1013 22:10:30.831576    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/old-k8s-version-061725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.003746209s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-007533 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.43s)
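
Outside the harness, the deploy-and-probe sequence recorded above can be reproduced by hand in roughly three steps. This is a minimal sketch: the explicit kubectl wait (the harness polls pod status itself), the 8m timeout, and the fact that testdata/busybox.yaml resolves relative to the test working directory are assumptions, not commands taken from this run:

	# create the busybox pod from the same manifest the test uses
	kubectl --context default-k8s-diff-port-007533 create -f testdata/busybox.yaml
	# wait until the pod reports Ready (the harness does this by polling instead)
	kubectl --context default-k8s-diff-port-007533 wait --for=condition=Ready pod/busybox --timeout=8m
	# same probe the test runs once the pod is healthy
	kubectl --context default-k8s-diff-port-007533 exec busybox -- /bin/sh -c "ulimit -n"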

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (11.96s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-007533 --alsologtostderr -v=3
E1013 22:10:41.073862    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/old-k8s-version-061725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-007533 --alsologtostderr -v=3: (11.956502028s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.96s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (1.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-400889 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-400889 --alsologtostderr -v=3: (1.213513094s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.21s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-400889 -n newest-cni-400889
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-400889 -n newest-cni-400889: exit status 7 (72.294893ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-400889 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (22.1s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-400889 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-400889 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (21.55773848s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-400889 -n newest-cni-400889
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (22.10s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-007533 -n default-k8s-diff-port-007533
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-007533 -n default-k8s-diff-port-007533: exit status 7 (70.50636ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-007533 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (54.54s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-007533 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1013 22:11:01.555111    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/old-k8s-version-061725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-007533 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (54.074718582s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-007533 -n default-k8s-diff-port-007533
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (54.54s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.36s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-400889 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (81.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-122822 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E1013 22:11:42.517027    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/old-k8s-version-061725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-122822 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m21.036877413s)
--- PASS: TestNetworkPlugins/group/auto/Start (81.04s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-ktrdv" [ba9e1654-c75c-4cdc-bd62-40572b9c029b] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.009751497s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.13s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-ktrdv" [ba9e1654-c75c-4cdc-bd62-40572b9c029b] Running
E1013 22:11:50.818444    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003998875s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-007533 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.13s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.41s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-007533 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.41s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (80.91s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-122822 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E1013 22:12:25.230573    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/no-preload-998398/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:12:25.236991    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/no-preload-998398/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:12:25.248366    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/no-preload-998398/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:12:25.269713    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/no-preload-998398/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:12:25.311078    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/no-preload-998398/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:12:25.392472    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/no-preload-998398/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:12:25.553944    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/no-preload-998398/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:12:25.875292    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/no-preload-998398/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:12:26.516759    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/no-preload-998398/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:12:27.798240    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/no-preload-998398/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:12:30.359948    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/no-preload-998398/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1013 22:12:35.482191    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/no-preload-998398/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-122822 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m20.914486644s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (80.91s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-122822 "pgrep -a kubelet"
I1013 22:12:39.965365    4299 config.go:182] Loaded profile config "auto-122822": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (12.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-122822 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-bk94l" [02971a5d-9c72-4993-9504-e83cbd251b21] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1013 22:12:45.723822    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/no-preload-998398/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-bk94l" [02971a5d-9c72-4993-9504-e83cbd251b21] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.004082147s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-122822 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-122822 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-122822 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)
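
The Localhost and HairPin checks above differ only in the target the netcat pod dials: localhost exercises the pod's own loopback, while dialing the host name netcat sends the connection out through the pod's own Service and back to itself, which only succeeds when hairpin traffic is handled correctly. A minimal sketch of the two probes, assuming (as the recorded command implies) that the netcat-deployment.yaml manifest also exposes a Service named netcat on port 8080:

	# loopback inside the pod
	kubectl --context auto-122822 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	# back in via the pod's own Service name (hairpin path)
	kubectl --context auto-122822 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"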

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (63.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-122822 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E1013 22:13:13.897860    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/addons-421494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-122822 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m3.095826879s)
--- PASS: TestNetworkPlugins/group/calico/Start (63.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-rmvvm" [3d553032-a990-4ee4-b596-a48dee94d918] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003941482s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
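
The ControllerPod step is a readiness wait on the CNI daemon pod, keyed on the label selector shown in the log (app=kindnet in kube-system). A hand-run equivalent is sketched below; it uses kubectl wait in place of the harness's own polling loop, and the 10m timeout simply mirrors the "waiting 10m0s" recorded above:

	# list the kindnet daemon pod(s) the test is waiting on
	kubectl --context kindnet-122822 -n kube-system get pods -l app=kindnet
	# block until they report Ready, matching the harness's wait
	kubectl --context kindnet-122822 -n kube-system wait --for=condition=Ready pod -l app=kindnet --timeout=10m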

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-122822 "pgrep -a kubelet"
I1013 22:13:33.540634    4299 config.go:182] Loaded profile config "kindnet-122822": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (12.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-122822 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-zrbww" [8ef6a3c6-d91c-492e-983f-688cefe0a186] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1013 22:13:34.316734    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/functional-192425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-zrbww" [8ef6a3c6-d91c-492e-983f-688cefe0a186] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.003884014s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-122822 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-122822 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-122822 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (59.66s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-122822 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-122822 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (59.657667292s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (59.66s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-vlx2h" [b1ab5d2f-f86d-4f0a-96f7-99db27c521ed] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003622676s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-122822 "pgrep -a kubelet"
I1013 22:14:22.768490    4299 config.go:182] Loaded profile config "calico-122822": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.38s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (11.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-122822 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-p5sv9" [cfad0bd9-0b64-4048-8538-321cd7a61c64] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-p5sv9" [cfad0bd9-0b64-4048-8538-321cd7a61c64] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004240224s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-122822 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-122822 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-122822 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (78.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-122822 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E1013 22:15:09.090139    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/no-preload-998398/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-122822 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m18.019672011s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (78.02s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-122822 "pgrep -a kubelet"
I1013 22:15:11.923701    4299 config.go:182] Loaded profile config "custom-flannel-122822": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.38s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-122822 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-7ts99" [725e0ac9-24ed-4b15-8e3f-24b43d78da76] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-7ts99" [725e0ac9-24ed-4b15-8e3f-24b43d78da76] Running
E1013 22:15:20.577391    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/old-k8s-version-061725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.00462881s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.38s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-122822 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-122822 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-122822 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (63.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-122822 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E1013 22:16:06.756716    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/default-k8s-diff-port-007533/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-122822 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m3.151113579s)
--- PASS: TestNetworkPlugins/group/flannel/Start (63.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-122822 "pgrep -a kubelet"
I1013 22:16:19.028076    4299 config.go:182] Loaded profile config "enable-default-cni-122822": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.44s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-122822 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-wk6vb" [5f0a16dd-a6f4-4e5e-b42f-b11f517a88f7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-wk6vb" [5f0a16dd-a6f4-4e5e-b42f-b11f517a88f7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.003634704s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.38s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-122822 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-122822 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-122822 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (83.65s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-122822 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-122822 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m23.653033385s)
--- PASS: TestNetworkPlugins/group/bridge/Start (83.65s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-qjcw2" [fdc3fd38-f4b4-4c64-9c69-fc78d5008da4] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003263568s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-122822 "pgrep -a kubelet"
I1013 22:16:58.499438    4299 config.go:182] Loaded profile config "flannel-122822": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-122822 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-4n7zk" [3527e2bf-df19-4291-bb02-5f3d70a98094] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-4n7zk" [3527e2bf-df19-4291-bb02-5f3d70a98094] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.003407082s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-122822 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-122822 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-122822 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-122822 "pgrep -a kubelet"
I1013 22:18:15.297491    4299 config.go:182] Loaded profile config "bridge-122822": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (10.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-122822 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-g9mvx" [7791f036-1a85-4a2c-b56f-d50eeec23bac] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-g9mvx" [7791f036-1a85-4a2c-b56f-d50eeec23bac] Running
E1013 22:18:21.274162    4299 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-2495/.minikube/profiles/auto-122822/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004147576s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-122822 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-122822 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-122822 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                    

Test skip (31/327)

x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.44s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-875751 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-875751" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-875751
--- SKIP: TestDownloadOnlyKic (0.44s)

                                                
                                    
x
+
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)
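Note: the platform skips above key on runtime.GOARCH and runtime.GOOS. A minimal sketch of that pattern, not the actual driver_install_or_update_test.go code:

	package sketch

	import (
		"runtime"
		"testing"
	)

	func TestPlatformGatedSketch(t *testing.T) {
		if runtime.GOARCH == "arm64" {
			t.Skip("Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144")
		}
		if runtime.GOOS != "darwin" {
			t.Skip("Skip if not darwin.")
		}
		// driver install/upgrade checks would follow on a supported platform
	}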

                                                
                                    
x
+
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-691681" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-691681
--- SKIP: TestStartStop/group/disable-driver-mounts (0.25s)
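Note: the cleanup step above shells out to the built minikube binary. A minimal sketch of driving that cleanup from a Go helper, assuming the binary path and profile name shown in the log; illustrative only, not the helpers_test.go implementation:

	package sketch

	import (
		"os/exec"
		"testing"
	)

	// cleanupProfile deletes a leftover minikube profile after a skipped test.
	func cleanupProfile(t *testing.T, binary, profile string) {
		t.Helper()
		out, err := exec.Command(binary, "delete", "-p", profile).CombinedOutput()
		if err != nil {
			t.Logf("failed to delete profile %q: %v\n%s", profile, err, out)
		}
	}

Called as cleanupProfile(t, "out/minikube-linux-arm64", "disable-driver-mounts-691681"), this mirrors the delete command logged above.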

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.83s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-122822 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-122822

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-122822

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-122822

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-122822

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-122822

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-122822

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-122822

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-122822

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-122822

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-122822

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-122822"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-122822"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-122822"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-122822

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-122822"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-122822"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-122822" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-122822" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-122822" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-122822" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-122822" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-122822" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-122822" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-122822" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-122822"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-122822"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-122822"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-122822"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-122822"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-122822" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-122822" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-122822" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-122822"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-122822"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-122822"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-122822"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-122822"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-122822

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-122822"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-122822"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-122822"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-122822"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-122822"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-122822"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-122822"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-122822"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-122822"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-122822"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-122822"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-122822"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-122822"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-122822"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-122822"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-122822"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-122822"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-122822"

                                                
                                                
----------------------- debugLogs end: kubenet-122822 [took: 3.672055183s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-122822" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-122822
--- SKIP: TestNetworkPlugins/group/kubenet (3.83s)
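Note: the empty kubeconfig in the debug dump above (clusters: null, contexts: null) is why every kubectl probe reports "context was not found": the kubenet-122822 profile was never started before the skip. A minimal sketch of checking for a context with client-go's clientcmd package, assuming a kubeconfig path; illustrative, not part of the test suite:

	package sketch

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	// contextExists reports whether the named context is present in the kubeconfig.
	func contextExists(kubeconfigPath, name string) (bool, error) {
		cfg, err := clientcmd.LoadFromFile(kubeconfigPath)
		if err != nil {
			return false, fmt.Errorf("load kubeconfig: %w", err)
		}
		_, ok := cfg.Contexts[name]
		return ok, nil
	}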

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.75s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-122822 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-122822

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-122822

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-122822

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-122822

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-122822

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-122822

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-122822

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-122822

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-122822

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-122822

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-122822"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-122822"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-122822"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-122822

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-122822"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-122822"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-122822" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-122822" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-122822" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-122822" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-122822" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-122822" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-122822" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-122822" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-122822"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-122822"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-122822"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-122822"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-122822"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-122822

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-122822

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-122822" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-122822" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-122822

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-122822

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-122822" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-122822" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-122822" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-122822" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-122822" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-122822"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-122822"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-122822"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-122822"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-122822"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-122822

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-122822"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-122822"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-122822"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-122822"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-122822"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-122822"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-122822"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-122822"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-122822"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-122822"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-122822"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-122822"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-122822"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-122822"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-122822"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-122822"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-122822"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-122822" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-122822"

                                                
                                                
----------------------- debugLogs end: cilium-122822 [took: 3.58433338s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-122822" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-122822
--- SKIP: TestNetworkPlugins/group/cilium (3.75s)

                                                
                                    